00:00:00.001 Started by upstream project "autotest-nightly" build number 4313 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3676 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.109 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.110 The recommended git tool is: git 00:00:00.110 using credential 00000000-0000-0000-0000-000000000002 00:00:00.112 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.138 Fetching changes from the remote Git repository 00:00:00.141 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.182 Using shallow fetch with depth 1 00:00:00.182 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.182 > git --version # timeout=10 00:00:00.237 > git --version # 'git version 2.39.2' 00:00:00.237 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.291 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.291 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.856 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.872 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.886 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.886 > git config core.sparsecheckout # timeout=10 00:00:03.898 > git read-tree -mu HEAD # timeout=10 00:00:03.915 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.942 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.942 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.025 [Pipeline] Start of Pipeline 00:00:04.039 [Pipeline] library 00:00:04.041 Loading library shm_lib@master 00:00:04.041 Library shm_lib@master is cached. Copying from home. 00:00:04.063 [Pipeline] node 00:00:04.074 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.076 [Pipeline] { 00:00:04.086 [Pipeline] catchError 00:00:04.087 [Pipeline] { 00:00:04.096 [Pipeline] wrap 00:00:04.104 [Pipeline] { 00:00:04.110 [Pipeline] stage 00:00:04.111 [Pipeline] { (Prologue) 00:00:04.355 [Pipeline] sh 00:00:04.638 + logger -p user.info -t JENKINS-CI 00:00:04.659 [Pipeline] echo 00:00:04.661 Node: GP11 00:00:04.669 [Pipeline] sh 00:00:04.972 [Pipeline] setCustomBuildProperty 00:00:04.986 [Pipeline] echo 00:00:04.987 Cleanup processes 00:00:04.991 [Pipeline] sh 00:00:05.273 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.273 2248639 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.286 [Pipeline] sh 00:00:05.571 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.571 ++ grep -v 'sudo pgrep' 00:00:05.571 ++ awk '{print $1}' 00:00:05.571 + sudo kill -9 00:00:05.571 + true 00:00:05.585 [Pipeline] cleanWs 00:00:05.594 [WS-CLEANUP] Deleting project workspace... 00:00:05.594 [WS-CLEANUP] Deferred wipeout is used... 
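The process-cleanup step earlier in this Prologue chains pgrep, grep and awk to find SPDK processes left over from a previous run of this workspace, kills them, and tolerates the empty case (the trailing "+ true"). A minimal standalone sketch of that pattern follows; the kill_stale_spdk function name and the default workspace argument are illustrative, not part of the CI scripts.

    #!/usr/bin/env bash
    # Kill any processes still referencing a previous run's workspace, mirroring the
    # "pgrep -af | grep -v 'sudo pgrep' | awk '{print $1}'" chain seen in the log.
    kill_stale_spdk() {
        local workspace=${1:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
        local pids
        pids=$(sudo pgrep -af "$workspace" | grep -v 'sudo pgrep' | awk '{print $1}')
        # "kill -9" with no PIDs exits non-zero, so swallow that like the pipeline's "+ true".
        sudo kill -9 $pids 2>/dev/null || true
    }

    kill_stale_spdk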
00:00:05.601 [WS-CLEANUP] done 00:00:05.606 [Pipeline] setCustomBuildProperty 00:00:05.617 [Pipeline] sh 00:00:05.896 + sudo git config --global --replace-all safe.directory '*' 00:00:06.001 [Pipeline] httpRequest 00:00:06.374 [Pipeline] echo 00:00:06.376 Sorcerer 10.211.164.20 is alive 00:00:06.384 [Pipeline] retry 00:00:06.385 [Pipeline] { 00:00:06.398 [Pipeline] httpRequest 00:00:06.403 HttpMethod: GET 00:00:06.403 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.404 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.423 Response Code: HTTP/1.1 200 OK 00:00:06.424 Success: Status code 200 is in the accepted range: 200,404 00:00:06.424 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.574 [Pipeline] } 00:00:08.593 [Pipeline] // retry 00:00:08.601 [Pipeline] sh 00:00:08.885 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.898 [Pipeline] httpRequest 00:00:09.211 [Pipeline] echo 00:00:09.212 Sorcerer 10.211.164.20 is alive 00:00:09.222 [Pipeline] retry 00:00:09.224 [Pipeline] { 00:00:09.241 [Pipeline] httpRequest 00:00:09.246 HttpMethod: GET 00:00:09.246 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:09.247 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:09.262 Response Code: HTTP/1.1 200 OK 00:00:09.262 Success: Status code 200 is in the accepted range: 200,404 00:00:09.263 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:54.409 [Pipeline] } 00:00:54.426 [Pipeline] // retry 00:00:54.434 [Pipeline] sh 00:00:54.723 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:58.027 [Pipeline] sh 00:00:58.316 + git -C spdk log --oneline -n5 00:00:58.316 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:00:58.316 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:00:58.316 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:00:58.316 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:00:58.316 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:00:58.329 [Pipeline] } 00:00:58.343 [Pipeline] // stage 00:00:58.353 [Pipeline] stage 00:00:58.356 [Pipeline] { (Prepare) 00:00:58.373 [Pipeline] writeFile 00:00:58.389 [Pipeline] sh 00:00:58.682 + logger -p user.info -t JENKINS-CI 00:00:58.693 [Pipeline] sh 00:00:58.973 + logger -p user.info -t JENKINS-CI 00:00:58.987 [Pipeline] sh 00:00:59.276 + cat autorun-spdk.conf 00:00:59.276 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.276 SPDK_TEST_NVMF=1 00:00:59.276 SPDK_TEST_NVME_CLI=1 00:00:59.276 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.276 SPDK_TEST_NVMF_NICS=e810 00:00:59.276 SPDK_RUN_ASAN=1 00:00:59.276 SPDK_RUN_UBSAN=1 00:00:59.276 NET_TYPE=phy 00:00:59.285 RUN_NIGHTLY=1 00:00:59.291 [Pipeline] readFile 00:00:59.318 [Pipeline] withEnv 00:00:59.320 [Pipeline] { 00:00:59.332 [Pipeline] sh 00:00:59.620 + set -ex 00:00:59.621 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:59.621 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:59.621 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.621 ++ SPDK_TEST_NVMF=1 00:00:59.621 ++ 
SPDK_TEST_NVME_CLI=1 00:00:59.621 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.621 ++ SPDK_TEST_NVMF_NICS=e810 00:00:59.621 ++ SPDK_RUN_ASAN=1 00:00:59.621 ++ SPDK_RUN_UBSAN=1 00:00:59.621 ++ NET_TYPE=phy 00:00:59.621 ++ RUN_NIGHTLY=1 00:00:59.621 + case $SPDK_TEST_NVMF_NICS in 00:00:59.621 + DRIVERS=ice 00:00:59.621 + [[ tcp == \r\d\m\a ]] 00:00:59.621 + [[ -n ice ]] 00:00:59.621 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:59.621 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:59.621 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:59.621 rmmod: ERROR: Module irdma is not currently loaded 00:00:59.621 rmmod: ERROR: Module i40iw is not currently loaded 00:00:59.621 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:59.621 + true 00:00:59.621 + for D in $DRIVERS 00:00:59.621 + sudo modprobe ice 00:00:59.621 + exit 0 00:00:59.631 [Pipeline] } 00:00:59.647 [Pipeline] // withEnv 00:00:59.652 [Pipeline] } 00:00:59.666 [Pipeline] // stage 00:00:59.677 [Pipeline] catchError 00:00:59.679 [Pipeline] { 00:00:59.694 [Pipeline] timeout 00:00:59.694 Timeout set to expire in 1 hr 0 min 00:00:59.696 [Pipeline] { 00:00:59.711 [Pipeline] stage 00:00:59.714 [Pipeline] { (Tests) 00:00:59.729 [Pipeline] sh 00:01:00.015 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.015 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.015 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.015 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:00.015 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:00.015 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.015 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:00.015 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.015 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.015 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.015 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:00.015 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.015 + source /etc/os-release 00:01:00.015 ++ NAME='Fedora Linux' 00:01:00.015 ++ VERSION='39 (Cloud Edition)' 00:01:00.015 ++ ID=fedora 00:01:00.015 ++ VERSION_ID=39 00:01:00.015 ++ VERSION_CODENAME= 00:01:00.015 ++ PLATFORM_ID=platform:f39 00:01:00.015 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:00.015 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:00.015 ++ LOGO=fedora-logo-icon 00:01:00.015 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:00.015 ++ HOME_URL=https://fedoraproject.org/ 00:01:00.015 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:00.015 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:00.015 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:00.015 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:00.015 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:00.015 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:00.015 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:00.015 ++ SUPPORT_END=2024-11-12 00:01:00.015 ++ VARIANT='Cloud Edition' 00:01:00.015 ++ VARIANT_ID=cloud 00:01:00.015 + uname -a 00:01:00.015 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:00.015 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:00.954 Hugepages 00:01:00.954 node hugesize free / total 00:01:00.954 node0 1048576kB 0 / 0 00:01:00.954 node0 2048kB 0 / 0 00:01:00.954 node1 1048576kB 0 / 0 00:01:00.954 node1 2048kB 0 / 0 00:01:00.954 00:01:00.954 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:00.954 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:00.954 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:00.954 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:00.954 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:00.954 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:00.954 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:00.954 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:00.954 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:00.955 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:00.955 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:00.955 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:00.955 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:00.955 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:00.955 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:00.955 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:00.955 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:00.955 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:01.213 + rm -f /tmp/spdk-ld-path 00:01:01.213 + source autorun-spdk.conf 00:01:01.213 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.213 ++ SPDK_TEST_NVMF=1 00:01:01.213 ++ SPDK_TEST_NVME_CLI=1 00:01:01.213 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.213 ++ SPDK_TEST_NVMF_NICS=e810 00:01:01.213 ++ SPDK_RUN_ASAN=1 00:01:01.213 ++ SPDK_RUN_UBSAN=1 00:01:01.213 ++ NET_TYPE=phy 00:01:01.213 ++ RUN_NIGHTLY=1 00:01:01.213 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:01.213 + [[ -n '' ]] 00:01:01.213 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:01.213 + for M in /var/spdk/build-*-manifest.txt 00:01:01.213 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:01:01.213 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.213 + for M in /var/spdk/build-*-manifest.txt 00:01:01.213 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:01.213 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.213 + for M in /var/spdk/build-*-manifest.txt 00:01:01.213 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:01.214 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.214 ++ uname 00:01:01.214 + [[ Linux == \L\i\n\u\x ]] 00:01:01.214 + sudo dmesg -T 00:01:01.214 + sudo dmesg --clear 00:01:01.214 + dmesg_pid=2249933 00:01:01.214 + [[ Fedora Linux == FreeBSD ]] 00:01:01.214 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.214 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.214 + sudo dmesg -Tw 00:01:01.214 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:01.214 + [[ -x /usr/src/fio-static/fio ]] 00:01:01.214 + export FIO_BIN=/usr/src/fio-static/fio 00:01:01.214 + FIO_BIN=/usr/src/fio-static/fio 00:01:01.214 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:01.214 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:01.214 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:01.214 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.214 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.214 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:01.214 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.214 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.214 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.214 18:03:13 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:01.214 18:03:13 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.214 18:03:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.214 18:03:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:01.214 18:03:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:01.214 18:03:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.214 18:03:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:01.214 18:03:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:01:01.214 18:03:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:01.214 18:03:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:01.214 18:03:13 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:01:01.214 18:03:13 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:01.214 18:03:13 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.214 18:03:13 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:01.214 18:03:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:01.214 18:03:13 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:01.214 18:03:13 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:01.214 18:03:13 -- scripts/common.sh@552 -- $ [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:01.214 18:03:13 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:01.214 18:03:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.214 18:03:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.214 18:03:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.214 18:03:13 -- paths/export.sh@5 -- $ export PATH 00:01:01.214 18:03:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.214 18:03:13 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:01.214 18:03:13 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:01.214 18:03:13 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732813393.XXXXXX 00:01:01.214 18:03:13 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732813393.5kDKDa 00:01:01.214 18:03:13 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:01.214 18:03:13 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:01.214 18:03:13 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:01.214 18:03:13 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:01.214 18:03:13 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:01.214 18:03:13 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:01.214 18:03:13 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:01.214 18:03:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.214 
18:03:13 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:01.214 18:03:13 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:01.214 18:03:13 -- pm/common@17 -- $ local monitor 00:01:01.214 18:03:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.214 18:03:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.214 18:03:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.214 18:03:13 -- pm/common@21 -- $ date +%s 00:01:01.214 18:03:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.214 18:03:13 -- pm/common@21 -- $ date +%s 00:01:01.214 18:03:13 -- pm/common@25 -- $ sleep 1 00:01:01.214 18:03:13 -- pm/common@21 -- $ date +%s 00:01:01.214 18:03:13 -- pm/common@21 -- $ date +%s 00:01:01.214 18:03:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732813393 00:01:01.214 18:03:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732813393 00:01:01.214 18:03:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732813393 00:01:01.214 18:03:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732813393 00:01:01.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732813393_collect-cpu-temp.pm.log 00:01:01.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732813393_collect-vmstat.pm.log 00:01:01.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732813393_collect-cpu-load.pm.log 00:01:01.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732813393_collect-bmc-pm.bmc.pm.log 00:01:02.154 18:03:14 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:02.154 18:03:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:02.154 18:03:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:02.154 18:03:14 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.154 18:03:14 -- spdk/autobuild.sh@16 -- $ date -u 00:01:02.154 Thu Nov 28 05:03:14 PM UTC 2024 00:01:02.154 18:03:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:02.154 v25.01-pre-276-g35cd3e84d 00:01:02.154 18:03:14 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:02.154 18:03:14 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:02.154 18:03:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:02.154 18:03:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:02.154 18:03:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.412 ************************************ 00:01:02.412 START TEST asan 00:01:02.412 
************************************ 00:01:02.412 18:03:14 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:02.412 using asan 00:01:02.412 00:01:02.412 real 0m0.000s 00:01:02.412 user 0m0.000s 00:01:02.412 sys 0m0.000s 00:01:02.412 18:03:14 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:02.412 18:03:14 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:02.412 ************************************ 00:01:02.412 END TEST asan 00:01:02.412 ************************************ 00:01:02.412 18:03:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:02.412 18:03:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:02.412 18:03:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:02.412 18:03:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:02.412 18:03:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.412 ************************************ 00:01:02.412 START TEST ubsan 00:01:02.412 ************************************ 00:01:02.412 18:03:14 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:02.412 using ubsan 00:01:02.412 00:01:02.412 real 0m0.000s 00:01:02.412 user 0m0.000s 00:01:02.412 sys 0m0.000s 00:01:02.412 18:03:14 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:02.412 18:03:14 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:02.412 ************************************ 00:01:02.412 END TEST ubsan 00:01:02.412 ************************************ 00:01:02.412 18:03:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:02.412 18:03:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:02.412 18:03:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:02.412 18:03:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:02.412 18:03:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:02.412 18:03:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:02.412 18:03:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:02.413 18:03:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:02.413 18:03:14 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:02.413 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:02.413 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:02.673 Using 'verbs' RDMA provider 00:01:13.256 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:23.245 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:23.245 Creating mk/config.mk...done. 00:01:23.245 Creating mk/cc.flags.mk...done. 00:01:23.245 Type 'make' to build. 00:01:23.245 18:03:34 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:23.245 18:03:34 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:23.245 18:03:34 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:23.245 18:03:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.245 ************************************ 00:01:23.245 START TEST make 00:01:23.245 ************************************ 00:01:23.245 18:03:34 make -- common/autotest_common.sh@1129 -- $ make -j48 00:01:23.245 make[1]: Nothing to be done for 'all'. 
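At this point the wrapper has run ./configure and started make -j48 (the "run_test make" block above). The same build can be reproduced by hand with the flags the log records; this is a sketch that assumes a local checkout at the CI path and substitutes nproc for the hard-coded job count.

    #!/usr/bin/env bash
    set -euo pipefail
    # Rebuild SPDK with the options shown in the log: debug build with ASan/UBSan,
    # coverage, shared libraries, and the fio plugin built against /usr/src/fio.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust to your checkout

    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared

    make -j"$(nproc)"   # the CI job pins this to -j48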
00:01:33.266 The Meson build system 00:01:33.266 Version: 1.5.0 00:01:33.266 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:33.266 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:33.266 Build type: native build 00:01:33.266 Program cat found: YES (/usr/bin/cat) 00:01:33.266 Project name: DPDK 00:01:33.266 Project version: 24.03.0 00:01:33.266 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:33.266 C linker for the host machine: cc ld.bfd 2.40-14 00:01:33.266 Host machine cpu family: x86_64 00:01:33.266 Host machine cpu: x86_64 00:01:33.266 Message: ## Building in Developer Mode ## 00:01:33.266 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:33.266 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:33.266 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:33.266 Program python3 found: YES (/usr/bin/python3) 00:01:33.266 Program cat found: YES (/usr/bin/cat) 00:01:33.266 Compiler for C supports arguments -march=native: YES 00:01:33.266 Checking for size of "void *" : 8 00:01:33.266 Checking for size of "void *" : 8 (cached) 00:01:33.266 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:33.266 Library m found: YES 00:01:33.266 Library numa found: YES 00:01:33.266 Has header "numaif.h" : YES 00:01:33.266 Library fdt found: NO 00:01:33.266 Library execinfo found: NO 00:01:33.266 Has header "execinfo.h" : YES 00:01:33.266 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:33.266 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:33.266 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:33.266 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:33.266 Run-time dependency openssl found: YES 3.1.1 00:01:33.266 Run-time dependency libpcap found: YES 1.10.4 00:01:33.266 Has header "pcap.h" with dependency libpcap: YES 00:01:33.266 Compiler for C supports arguments -Wcast-qual: YES 00:01:33.266 Compiler for C supports arguments -Wdeprecated: YES 00:01:33.266 Compiler for C supports arguments -Wformat: YES 00:01:33.266 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:33.266 Compiler for C supports arguments -Wformat-security: NO 00:01:33.266 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:33.266 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:33.266 Compiler for C supports arguments -Wnested-externs: YES 00:01:33.266 Compiler for C supports arguments -Wold-style-definition: YES 00:01:33.266 Compiler for C supports arguments -Wpointer-arith: YES 00:01:33.266 Compiler for C supports arguments -Wsign-compare: YES 00:01:33.266 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:33.266 Compiler for C supports arguments -Wundef: YES 00:01:33.267 Compiler for C supports arguments -Wwrite-strings: YES 00:01:33.267 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:33.267 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:33.267 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:33.267 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:33.267 Program objdump found: YES (/usr/bin/objdump) 00:01:33.267 Compiler for C supports arguments -mavx512f: YES 00:01:33.267 Checking if "AVX512 checking" compiles: YES 
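Each "Compiler for C supports arguments ..." result above comes from Meson compiling a throwaway translation unit with the candidate flag. The probe can be reproduced by hand as below; this is a generic illustration of that kind of check, not Meson's actual implementation, and the choice of cc and -mavx512f simply follows the log.

    # Ask the compiler whether it accepts -mavx512f by compiling an empty program.
    if echo 'int main(void){return 0;}' | cc -Werror -mavx512f -x c - -o /dev/null 2>/dev/null; then
        echo "cc supports -mavx512f"
    else
        echo "cc does not support -mavx512f"
    fi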
00:01:33.267 Fetching value of define "__SSE4_2__" : 1 00:01:33.267 Fetching value of define "__AES__" : 1 00:01:33.267 Fetching value of define "__AVX__" : 1 00:01:33.267 Fetching value of define "__AVX2__" : (undefined) 00:01:33.267 Fetching value of define "__AVX512BW__" : (undefined) 00:01:33.267 Fetching value of define "__AVX512CD__" : (undefined) 00:01:33.267 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:33.267 Fetching value of define "__AVX512F__" : (undefined) 00:01:33.267 Fetching value of define "__AVX512VL__" : (undefined) 00:01:33.267 Fetching value of define "__PCLMUL__" : 1 00:01:33.267 Fetching value of define "__RDRND__" : 1 00:01:33.267 Fetching value of define "__RDSEED__" : (undefined) 00:01:33.267 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:33.267 Fetching value of define "__znver1__" : (undefined) 00:01:33.267 Fetching value of define "__znver2__" : (undefined) 00:01:33.267 Fetching value of define "__znver3__" : (undefined) 00:01:33.267 Fetching value of define "__znver4__" : (undefined) 00:01:33.267 Library asan found: YES 00:01:33.267 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:33.267 Message: lib/log: Defining dependency "log" 00:01:33.267 Message: lib/kvargs: Defining dependency "kvargs" 00:01:33.267 Message: lib/telemetry: Defining dependency "telemetry" 00:01:33.267 Library rt found: YES 00:01:33.267 Checking for function "getentropy" : NO 00:01:33.267 Message: lib/eal: Defining dependency "eal" 00:01:33.267 Message: lib/ring: Defining dependency "ring" 00:01:33.267 Message: lib/rcu: Defining dependency "rcu" 00:01:33.267 Message: lib/mempool: Defining dependency "mempool" 00:01:33.267 Message: lib/mbuf: Defining dependency "mbuf" 00:01:33.267 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:33.267 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.267 Compiler for C supports arguments -mpclmul: YES 00:01:33.267 Compiler for C supports arguments -maes: YES 00:01:33.267 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:33.267 Compiler for C supports arguments -mavx512bw: YES 00:01:33.267 Compiler for C supports arguments -mavx512dq: YES 00:01:33.267 Compiler for C supports arguments -mavx512vl: YES 00:01:33.267 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:33.267 Compiler for C supports arguments -mavx2: YES 00:01:33.267 Compiler for C supports arguments -mavx: YES 00:01:33.267 Message: lib/net: Defining dependency "net" 00:01:33.267 Message: lib/meter: Defining dependency "meter" 00:01:33.267 Message: lib/ethdev: Defining dependency "ethdev" 00:01:33.267 Message: lib/pci: Defining dependency "pci" 00:01:33.267 Message: lib/cmdline: Defining dependency "cmdline" 00:01:33.267 Message: lib/hash: Defining dependency "hash" 00:01:33.267 Message: lib/timer: Defining dependency "timer" 00:01:33.267 Message: lib/compressdev: Defining dependency "compressdev" 00:01:33.267 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:33.267 Message: lib/dmadev: Defining dependency "dmadev" 00:01:33.267 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:33.267 Message: lib/power: Defining dependency "power" 00:01:33.267 Message: lib/reorder: Defining dependency "reorder" 00:01:33.267 Message: lib/security: Defining dependency "security" 00:01:33.267 Has header "linux/userfaultfd.h" : YES 00:01:33.267 Has header "linux/vduse.h" : YES 00:01:33.267 Message: lib/vhost: Defining dependency "vhost" 00:01:33.267 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:01:33.267 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:33.267 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:33.267 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:33.267 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:33.267 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:33.267 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:33.267 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:33.267 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:33.267 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:33.267 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:33.267 Configuring doxy-api-html.conf using configuration 00:01:33.267 Configuring doxy-api-man.conf using configuration 00:01:33.267 Program mandb found: YES (/usr/bin/mandb) 00:01:33.267 Program sphinx-build found: NO 00:01:33.267 Configuring rte_build_config.h using configuration 00:01:33.267 Message: 00:01:33.267 ================= 00:01:33.267 Applications Enabled 00:01:33.267 ================= 00:01:33.267 00:01:33.267 apps: 00:01:33.267 00:01:33.267 00:01:33.267 Message: 00:01:33.267 ================= 00:01:33.267 Libraries Enabled 00:01:33.267 ================= 00:01:33.267 00:01:33.267 libs: 00:01:33.267 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:33.267 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:33.267 cryptodev, dmadev, power, reorder, security, vhost, 00:01:33.267 00:01:33.267 Message: 00:01:33.267 =============== 00:01:33.267 Drivers Enabled 00:01:33.267 =============== 00:01:33.267 00:01:33.267 common: 00:01:33.267 00:01:33.267 bus: 00:01:33.267 pci, vdev, 00:01:33.267 mempool: 00:01:33.267 ring, 00:01:33.267 dma: 00:01:33.267 00:01:33.267 net: 00:01:33.267 00:01:33.267 crypto: 00:01:33.267 00:01:33.267 compress: 00:01:33.267 00:01:33.267 vdpa: 00:01:33.267 00:01:33.267 00:01:33.267 Message: 00:01:33.267 ================= 00:01:33.267 Content Skipped 00:01:33.267 ================= 00:01:33.267 00:01:33.267 apps: 00:01:33.267 dumpcap: explicitly disabled via build config 00:01:33.267 graph: explicitly disabled via build config 00:01:33.267 pdump: explicitly disabled via build config 00:01:33.267 proc-info: explicitly disabled via build config 00:01:33.267 test-acl: explicitly disabled via build config 00:01:33.267 test-bbdev: explicitly disabled via build config 00:01:33.267 test-cmdline: explicitly disabled via build config 00:01:33.267 test-compress-perf: explicitly disabled via build config 00:01:33.267 test-crypto-perf: explicitly disabled via build config 00:01:33.267 test-dma-perf: explicitly disabled via build config 00:01:33.267 test-eventdev: explicitly disabled via build config 00:01:33.267 test-fib: explicitly disabled via build config 00:01:33.267 test-flow-perf: explicitly disabled via build config 00:01:33.267 test-gpudev: explicitly disabled via build config 00:01:33.267 test-mldev: explicitly disabled via build config 00:01:33.267 test-pipeline: explicitly disabled via build config 00:01:33.267 test-pmd: explicitly disabled via build config 00:01:33.267 test-regex: explicitly disabled via build config 00:01:33.267 test-sad: explicitly disabled via build config 00:01:33.267 test-security-perf: explicitly disabled via build config 00:01:33.267 00:01:33.267 libs: 00:01:33.267 argparse: explicitly 
disabled via build config 00:01:33.267 metrics: explicitly disabled via build config 00:01:33.267 acl: explicitly disabled via build config 00:01:33.267 bbdev: explicitly disabled via build config 00:01:33.267 bitratestats: explicitly disabled via build config 00:01:33.267 bpf: explicitly disabled via build config 00:01:33.267 cfgfile: explicitly disabled via build config 00:01:33.267 distributor: explicitly disabled via build config 00:01:33.267 efd: explicitly disabled via build config 00:01:33.267 eventdev: explicitly disabled via build config 00:01:33.267 dispatcher: explicitly disabled via build config 00:01:33.267 gpudev: explicitly disabled via build config 00:01:33.267 gro: explicitly disabled via build config 00:01:33.267 gso: explicitly disabled via build config 00:01:33.267 ip_frag: explicitly disabled via build config 00:01:33.267 jobstats: explicitly disabled via build config 00:01:33.267 latencystats: explicitly disabled via build config 00:01:33.267 lpm: explicitly disabled via build config 00:01:33.267 member: explicitly disabled via build config 00:01:33.267 pcapng: explicitly disabled via build config 00:01:33.267 rawdev: explicitly disabled via build config 00:01:33.267 regexdev: explicitly disabled via build config 00:01:33.267 mldev: explicitly disabled via build config 00:01:33.267 rib: explicitly disabled via build config 00:01:33.267 sched: explicitly disabled via build config 00:01:33.267 stack: explicitly disabled via build config 00:01:33.267 ipsec: explicitly disabled via build config 00:01:33.267 pdcp: explicitly disabled via build config 00:01:33.267 fib: explicitly disabled via build config 00:01:33.267 port: explicitly disabled via build config 00:01:33.267 pdump: explicitly disabled via build config 00:01:33.267 table: explicitly disabled via build config 00:01:33.267 pipeline: explicitly disabled via build config 00:01:33.267 graph: explicitly disabled via build config 00:01:33.267 node: explicitly disabled via build config 00:01:33.267 00:01:33.267 drivers: 00:01:33.267 common/cpt: not in enabled drivers build config 00:01:33.267 common/dpaax: not in enabled drivers build config 00:01:33.267 common/iavf: not in enabled drivers build config 00:01:33.267 common/idpf: not in enabled drivers build config 00:01:33.267 common/ionic: not in enabled drivers build config 00:01:33.267 common/mvep: not in enabled drivers build config 00:01:33.267 common/octeontx: not in enabled drivers build config 00:01:33.267 bus/auxiliary: not in enabled drivers build config 00:01:33.267 bus/cdx: not in enabled drivers build config 00:01:33.267 bus/dpaa: not in enabled drivers build config 00:01:33.267 bus/fslmc: not in enabled drivers build config 00:01:33.267 bus/ifpga: not in enabled drivers build config 00:01:33.267 bus/platform: not in enabled drivers build config 00:01:33.267 bus/uacce: not in enabled drivers build config 00:01:33.268 bus/vmbus: not in enabled drivers build config 00:01:33.268 common/cnxk: not in enabled drivers build config 00:01:33.268 common/mlx5: not in enabled drivers build config 00:01:33.268 common/nfp: not in enabled drivers build config 00:01:33.268 common/nitrox: not in enabled drivers build config 00:01:33.268 common/qat: not in enabled drivers build config 00:01:33.268 common/sfc_efx: not in enabled drivers build config 00:01:33.268 mempool/bucket: not in enabled drivers build config 00:01:33.268 mempool/cnxk: not in enabled drivers build config 00:01:33.268 mempool/dpaa: not in enabled drivers build config 00:01:33.268 mempool/dpaa2: not in 
enabled drivers build config 00:01:33.268 mempool/octeontx: not in enabled drivers build config 00:01:33.268 mempool/stack: not in enabled drivers build config 00:01:33.268 dma/cnxk: not in enabled drivers build config 00:01:33.268 dma/dpaa: not in enabled drivers build config 00:01:33.268 dma/dpaa2: not in enabled drivers build config 00:01:33.268 dma/hisilicon: not in enabled drivers build config 00:01:33.268 dma/idxd: not in enabled drivers build config 00:01:33.268 dma/ioat: not in enabled drivers build config 00:01:33.268 dma/skeleton: not in enabled drivers build config 00:01:33.268 net/af_packet: not in enabled drivers build config 00:01:33.268 net/af_xdp: not in enabled drivers build config 00:01:33.268 net/ark: not in enabled drivers build config 00:01:33.268 net/atlantic: not in enabled drivers build config 00:01:33.268 net/avp: not in enabled drivers build config 00:01:33.268 net/axgbe: not in enabled drivers build config 00:01:33.268 net/bnx2x: not in enabled drivers build config 00:01:33.268 net/bnxt: not in enabled drivers build config 00:01:33.268 net/bonding: not in enabled drivers build config 00:01:33.268 net/cnxk: not in enabled drivers build config 00:01:33.268 net/cpfl: not in enabled drivers build config 00:01:33.268 net/cxgbe: not in enabled drivers build config 00:01:33.268 net/dpaa: not in enabled drivers build config 00:01:33.268 net/dpaa2: not in enabled drivers build config 00:01:33.268 net/e1000: not in enabled drivers build config 00:01:33.268 net/ena: not in enabled drivers build config 00:01:33.268 net/enetc: not in enabled drivers build config 00:01:33.268 net/enetfec: not in enabled drivers build config 00:01:33.268 net/enic: not in enabled drivers build config 00:01:33.268 net/failsafe: not in enabled drivers build config 00:01:33.268 net/fm10k: not in enabled drivers build config 00:01:33.268 net/gve: not in enabled drivers build config 00:01:33.268 net/hinic: not in enabled drivers build config 00:01:33.268 net/hns3: not in enabled drivers build config 00:01:33.268 net/i40e: not in enabled drivers build config 00:01:33.268 net/iavf: not in enabled drivers build config 00:01:33.268 net/ice: not in enabled drivers build config 00:01:33.268 net/idpf: not in enabled drivers build config 00:01:33.268 net/igc: not in enabled drivers build config 00:01:33.268 net/ionic: not in enabled drivers build config 00:01:33.268 net/ipn3ke: not in enabled drivers build config 00:01:33.268 net/ixgbe: not in enabled drivers build config 00:01:33.268 net/mana: not in enabled drivers build config 00:01:33.268 net/memif: not in enabled drivers build config 00:01:33.268 net/mlx4: not in enabled drivers build config 00:01:33.268 net/mlx5: not in enabled drivers build config 00:01:33.268 net/mvneta: not in enabled drivers build config 00:01:33.268 net/mvpp2: not in enabled drivers build config 00:01:33.268 net/netvsc: not in enabled drivers build config 00:01:33.268 net/nfb: not in enabled drivers build config 00:01:33.268 net/nfp: not in enabled drivers build config 00:01:33.268 net/ngbe: not in enabled drivers build config 00:01:33.268 net/null: not in enabled drivers build config 00:01:33.268 net/octeontx: not in enabled drivers build config 00:01:33.268 net/octeon_ep: not in enabled drivers build config 00:01:33.268 net/pcap: not in enabled drivers build config 00:01:33.268 net/pfe: not in enabled drivers build config 00:01:33.268 net/qede: not in enabled drivers build config 00:01:33.268 net/ring: not in enabled drivers build config 00:01:33.268 net/sfc: not in enabled 
drivers build config 00:01:33.268 net/softnic: not in enabled drivers build config 00:01:33.268 net/tap: not in enabled drivers build config 00:01:33.268 net/thunderx: not in enabled drivers build config 00:01:33.268 net/txgbe: not in enabled drivers build config 00:01:33.268 net/vdev_netvsc: not in enabled drivers build config 00:01:33.268 net/vhost: not in enabled drivers build config 00:01:33.268 net/virtio: not in enabled drivers build config 00:01:33.268 net/vmxnet3: not in enabled drivers build config 00:01:33.268 raw/*: missing internal dependency, "rawdev" 00:01:33.268 crypto/armv8: not in enabled drivers build config 00:01:33.268 crypto/bcmfs: not in enabled drivers build config 00:01:33.268 crypto/caam_jr: not in enabled drivers build config 00:01:33.268 crypto/ccp: not in enabled drivers build config 00:01:33.268 crypto/cnxk: not in enabled drivers build config 00:01:33.268 crypto/dpaa_sec: not in enabled drivers build config 00:01:33.268 crypto/dpaa2_sec: not in enabled drivers build config 00:01:33.268 crypto/ipsec_mb: not in enabled drivers build config 00:01:33.268 crypto/mlx5: not in enabled drivers build config 00:01:33.268 crypto/mvsam: not in enabled drivers build config 00:01:33.268 crypto/nitrox: not in enabled drivers build config 00:01:33.268 crypto/null: not in enabled drivers build config 00:01:33.268 crypto/octeontx: not in enabled drivers build config 00:01:33.268 crypto/openssl: not in enabled drivers build config 00:01:33.268 crypto/scheduler: not in enabled drivers build config 00:01:33.268 crypto/uadk: not in enabled drivers build config 00:01:33.268 crypto/virtio: not in enabled drivers build config 00:01:33.268 compress/isal: not in enabled drivers build config 00:01:33.268 compress/mlx5: not in enabled drivers build config 00:01:33.268 compress/nitrox: not in enabled drivers build config 00:01:33.268 compress/octeontx: not in enabled drivers build config 00:01:33.268 compress/zlib: not in enabled drivers build config 00:01:33.268 regex/*: missing internal dependency, "regexdev" 00:01:33.268 ml/*: missing internal dependency, "mldev" 00:01:33.268 vdpa/ifc: not in enabled drivers build config 00:01:33.268 vdpa/mlx5: not in enabled drivers build config 00:01:33.268 vdpa/nfp: not in enabled drivers build config 00:01:33.268 vdpa/sfc: not in enabled drivers build config 00:01:33.268 event/*: missing internal dependency, "eventdev" 00:01:33.268 baseband/*: missing internal dependency, "bbdev" 00:01:33.268 gpu/*: missing internal dependency, "gpudev" 00:01:33.268 00:01:33.268 00:01:33.268 Build targets in project: 85 00:01:33.268 00:01:33.268 DPDK 24.03.0 00:01:33.268 00:01:33.268 User defined options 00:01:33.268 buildtype : debug 00:01:33.268 default_library : shared 00:01:33.268 libdir : lib 00:01:33.268 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:33.268 b_sanitize : address 00:01:33.268 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:33.268 c_link_args : 00:01:33.268 cpu_instruction_set: native 00:01:33.268 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:33.268 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:33.268 enable_docs : false 00:01:33.268 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:33.268 enable_kmods : false 00:01:33.268 max_lcores : 128 00:01:33.268 tests : false 00:01:33.268 00:01:33.268 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:33.268 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:33.268 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:33.268 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:33.268 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:33.268 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:33.268 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:33.268 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:33.268 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:33.268 [8/268] Linking static target lib/librte_kvargs.a 00:01:33.268 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:33.268 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:33.268 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:33.268 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:33.268 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:33.268 [14/268] Linking static target lib/librte_log.a 00:01:33.268 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:33.268 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:33.846 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.846 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:33.846 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:33.846 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:33.846 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:33.846 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:34.112 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:34.112 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:34.112 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:34.112 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:34.112 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:34.112 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:34.112 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:34.112 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:34.112 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:34.112 [32/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:34.112 [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:34.112 [34/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:34.112 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:34.112 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:34.112 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:34.112 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:34.112 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:34.112 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:34.112 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:34.112 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:34.112 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:34.112 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:34.112 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:34.112 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:34.112 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:34.112 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:34.112 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:34.112 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:34.112 [51/268] Linking static target lib/librte_telemetry.a 00:01:34.112 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:34.112 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:34.371 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:34.371 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:34.371 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:34.371 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:34.371 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:34.372 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:34.372 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:34.372 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:34.372 [62/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.372 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:34.372 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:34.372 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:34.372 [66/268] Linking target lib/librte_log.so.24.1 00:01:34.635 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:34.635 [68/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:34.898 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:34.898 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:34.898 [71/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:34.898 [72/268] Linking target lib/librte_kvargs.so.24.1 
00:01:34.898 [73/268] Linking static target lib/librte_pci.a 00:01:34.898 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:34.898 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:34.898 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:34.898 [77/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:34.898 [78/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:34.898 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:35.165 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:35.165 [81/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:35.165 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:35.165 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:35.165 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:35.165 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:35.165 [86/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:35.165 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:35.165 [88/268] Linking static target lib/librte_ring.a 00:01:35.165 [89/268] Linking static target lib/librte_meter.a 00:01:35.165 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:35.165 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:35.165 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:35.165 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:35.165 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:35.165 [95/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:35.165 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:35.165 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:35.165 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:35.165 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:35.165 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:35.165 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:35.165 [102/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.165 [103/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:35.165 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:35.165 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:35.165 [106/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:35.165 [107/268] Linking target lib/librte_telemetry.so.24.1 00:01:35.429 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:35.429 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:35.429 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:35.429 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:35.429 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:35.429 [113/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson 
to capture output) 00:01:35.429 [114/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:35.429 [115/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:35.429 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:35.429 [117/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:35.429 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:35.429 [119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:35.692 [120/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:35.692 [121/268] Linking static target lib/librte_mempool.a 00:01:35.692 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:35.692 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:35.692 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:35.692 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:35.692 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:35.692 [127/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:35.692 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:35.692 [129/268] Linking static target lib/librte_rcu.a 00:01:35.692 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:35.692 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:35.956 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.956 [133/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.956 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:35.956 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:35.956 [136/268] Linking static target lib/librte_cmdline.a 00:01:35.956 [137/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:35.956 [138/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:36.219 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:36.219 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:36.219 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:36.219 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:36.219 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:36.219 [144/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:36.219 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:36.219 [146/268] Linking static target lib/librte_eal.a 00:01:36.219 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:36.219 [148/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:36.219 [149/268] Linking static target lib/librte_timer.a 00:01:36.219 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:36.219 [151/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:36.479 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:36.479 [153/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:36.479 [154/268] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:36.479 [155/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.479 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:36.479 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:36.740 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:36.740 [159/268] Linking static target lib/librte_dmadev.a 00:01:36.740 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:36.740 [161/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.740 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.740 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:36.740 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:36.999 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:36.999 [166/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:36.999 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:36.999 [168/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:36.999 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:36.999 [170/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:36.999 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:36.999 [172/268] Linking static target lib/librte_net.a 00:01:36.999 [173/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:36.999 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:36.999 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:36.999 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:36.999 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:36.999 [178/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.999 [179/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:36.999 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:37.258 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.258 [182/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:37.258 [183/268] Linking static target lib/librte_power.a 00:01:37.258 [184/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.258 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:37.258 [186/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:37.258 [187/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:37.258 [188/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:37.258 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:37.258 [190/268] Linking static target drivers/librte_bus_vdev.a 00:01:37.258 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:37.518 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:37.518 
[193/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:37.518 [194/268] Linking static target lib/librte_hash.a 00:01:37.518 [195/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:37.518 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:37.518 [197/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.518 [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:37.518 [199/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.518 [200/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.518 [201/268] Linking static target drivers/librte_bus_pci.a 00:01:37.518 [202/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:37.518 [203/268] Linking static target lib/librte_compressdev.a 00:01:37.777 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:37.777 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:37.777 [206/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:37.777 [207/268] Linking static target drivers/librte_mempool_ring.a 00:01:37.777 [208/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.777 [209/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:37.777 [210/268] Linking static target lib/librte_reorder.a 00:01:37.777 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:38.035 [212/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.035 [213/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.035 [214/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.035 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.600 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:38.600 [217/268] Linking static target lib/librte_security.a 00:01:38.858 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.858 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:39.424 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.424 [221/268] Linking static target lib/librte_mbuf.a 00:01:39.682 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.940 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:39.940 [224/268] Linking static target lib/librte_cryptodev.a 00:01:40.508 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.508 [226/268] Linking static target lib/librte_ethdev.a 00:01:40.766 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.141 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.141 [229/268] Linking target lib/librte_eal.so.24.1 00:01:42.400 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:42.400 [231/268] Linking target 
lib/librte_ring.so.24.1 00:01:42.400 [232/268] Linking target lib/librte_dmadev.so.24.1 00:01:42.400 [233/268] Linking target lib/librte_meter.so.24.1 00:01:42.400 [234/268] Linking target lib/librte_timer.so.24.1 00:01:42.400 [235/268] Linking target lib/librte_pci.so.24.1 00:01:42.400 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:42.400 [237/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:42.400 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:42.400 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:42.400 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:42.400 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:42.400 [242/268] Linking target lib/librte_rcu.so.24.1 00:01:42.659 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:42.659 [244/268] Linking target lib/librte_mempool.so.24.1 00:01:42.659 [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:42.659 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:42.659 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:42.659 [248/268] Linking target lib/librte_mbuf.so.24.1 00:01:42.917 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:42.917 [250/268] Linking target lib/librte_reorder.so.24.1 00:01:42.917 [251/268] Linking target lib/librte_compressdev.so.24.1 00:01:42.917 [252/268] Linking target lib/librte_net.so.24.1 00:01:42.917 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:01:42.917 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:42.917 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:43.175 [256/268] Linking target lib/librte_security.so.24.1 00:01:43.175 [257/268] Linking target lib/librte_hash.so.24.1 00:01:43.175 [258/268] Linking target lib/librte_cmdline.so.24.1 00:01:43.175 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:43.741 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:45.118 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.118 [262/268] Linking target lib/librte_ethdev.so.24.1 00:01:45.118 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:45.118 [264/268] Linking target lib/librte_power.so.24.1 00:02:11.658 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:11.658 [266/268] Linking static target lib/librte_vhost.a 00:02:11.658 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.658 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:11.658 INFO: autodetecting backend as ninja 00:02:11.658 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:11.658 CC lib/ut_mock/mock.o 00:02:11.658 CC lib/ut/ut.o 00:02:11.658 CC lib/log/log.o 00:02:11.658 CC lib/log/log_flags.o 00:02:11.658 CC lib/log/log_deprecated.o 00:02:11.658 LIB libspdk_ut.a 00:02:11.658 LIB libspdk_ut_mock.a 00:02:11.658 LIB libspdk_log.a 00:02:11.658 SO libspdk_ut.so.2.0 00:02:11.658 SO libspdk_ut_mock.so.6.0 00:02:11.658 SO 
libspdk_log.so.7.1 00:02:11.658 SYMLINK libspdk_ut.so 00:02:11.658 SYMLINK libspdk_ut_mock.so 00:02:11.658 SYMLINK libspdk_log.so 00:02:11.658 CC lib/dma/dma.o 00:02:11.658 CC lib/util/base64.o 00:02:11.658 CXX lib/trace_parser/trace.o 00:02:11.658 CC lib/util/bit_array.o 00:02:11.658 CC lib/ioat/ioat.o 00:02:11.658 CC lib/util/cpuset.o 00:02:11.658 CC lib/util/crc16.o 00:02:11.658 CC lib/util/crc32.o 00:02:11.658 CC lib/util/crc32c.o 00:02:11.658 CC lib/util/crc32_ieee.o 00:02:11.658 CC lib/util/crc64.o 00:02:11.658 CC lib/util/dif.o 00:02:11.658 CC lib/util/fd.o 00:02:11.658 CC lib/util/fd_group.o 00:02:11.658 CC lib/util/file.o 00:02:11.658 CC lib/util/hexlify.o 00:02:11.658 CC lib/util/iov.o 00:02:11.658 CC lib/util/math.o 00:02:11.658 CC lib/util/net.o 00:02:11.658 CC lib/util/pipe.o 00:02:11.658 CC lib/util/strerror_tls.o 00:02:11.658 CC lib/util/string.o 00:02:11.658 CC lib/util/uuid.o 00:02:11.658 CC lib/util/zipf.o 00:02:11.658 CC lib/util/xor.o 00:02:11.658 CC lib/util/md5.o 00:02:11.658 CC lib/vfio_user/host/vfio_user_pci.o 00:02:11.658 CC lib/vfio_user/host/vfio_user.o 00:02:11.658 LIB libspdk_dma.a 00:02:11.658 SO libspdk_dma.so.5.0 00:02:11.658 SYMLINK libspdk_dma.so 00:02:11.917 LIB libspdk_ioat.a 00:02:11.917 SO libspdk_ioat.so.7.0 00:02:11.917 SYMLINK libspdk_ioat.so 00:02:11.917 LIB libspdk_vfio_user.a 00:02:11.917 SO libspdk_vfio_user.so.5.0 00:02:11.917 SYMLINK libspdk_vfio_user.so 00:02:12.251 LIB libspdk_util.a 00:02:12.549 SO libspdk_util.so.10.1 00:02:12.549 SYMLINK libspdk_util.so 00:02:12.549 CC lib/idxd/idxd.o 00:02:12.549 CC lib/vmd/vmd.o 00:02:12.549 CC lib/json/json_parse.o 00:02:12.549 CC lib/conf/conf.o 00:02:12.549 CC lib/rdma_utils/rdma_utils.o 00:02:12.549 CC lib/env_dpdk/env.o 00:02:12.549 CC lib/json/json_util.o 00:02:12.549 CC lib/vmd/led.o 00:02:12.549 CC lib/env_dpdk/memory.o 00:02:12.549 CC lib/idxd/idxd_user.o 00:02:12.549 CC lib/json/json_write.o 00:02:12.549 CC lib/idxd/idxd_kernel.o 00:02:12.549 CC lib/env_dpdk/pci.o 00:02:12.549 CC lib/env_dpdk/init.o 00:02:12.549 CC lib/env_dpdk/threads.o 00:02:12.550 CC lib/env_dpdk/pci_ioat.o 00:02:12.550 CC lib/env_dpdk/pci_virtio.o 00:02:12.550 CC lib/env_dpdk/pci_vmd.o 00:02:12.550 CC lib/env_dpdk/pci_idxd.o 00:02:12.550 CC lib/env_dpdk/pci_event.o 00:02:12.550 CC lib/env_dpdk/sigbus_handler.o 00:02:12.550 CC lib/env_dpdk/pci_dpdk.o 00:02:12.550 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:12.550 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:12.550 LIB libspdk_trace_parser.a 00:02:12.823 SO libspdk_trace_parser.so.6.0 00:02:12.824 SYMLINK libspdk_trace_parser.so 00:02:12.824 LIB libspdk_conf.a 00:02:12.824 SO libspdk_conf.so.6.0 00:02:13.082 LIB libspdk_rdma_utils.a 00:02:13.082 SYMLINK libspdk_conf.so 00:02:13.082 SO libspdk_rdma_utils.so.1.0 00:02:13.082 SYMLINK libspdk_rdma_utils.so 00:02:13.082 LIB libspdk_json.a 00:02:13.082 SO libspdk_json.so.6.0 00:02:13.082 SYMLINK libspdk_json.so 00:02:13.339 CC lib/rdma_provider/common.o 00:02:13.339 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:13.339 CC lib/jsonrpc/jsonrpc_server.o 00:02:13.339 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:13.339 CC lib/jsonrpc/jsonrpc_client.o 00:02:13.339 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:13.598 LIB libspdk_rdma_provider.a 00:02:13.598 LIB libspdk_idxd.a 00:02:13.598 SO libspdk_rdma_provider.so.7.0 00:02:13.598 SO libspdk_idxd.so.12.1 00:02:13.598 LIB libspdk_vmd.a 00:02:13.598 SO libspdk_vmd.so.6.0 00:02:13.598 SYMLINK libspdk_rdma_provider.so 00:02:13.598 SYMLINK libspdk_idxd.so 00:02:13.598 SYMLINK libspdk_vmd.so 00:02:13.598 LIB 
libspdk_jsonrpc.a 00:02:13.598 SO libspdk_jsonrpc.so.6.0 00:02:13.856 SYMLINK libspdk_jsonrpc.so 00:02:13.856 CC lib/rpc/rpc.o 00:02:14.114 LIB libspdk_rpc.a 00:02:14.114 SO libspdk_rpc.so.6.0 00:02:14.114 SYMLINK libspdk_rpc.so 00:02:14.372 CC lib/notify/notify.o 00:02:14.372 CC lib/notify/notify_rpc.o 00:02:14.372 CC lib/trace/trace.o 00:02:14.372 CC lib/trace/trace_flags.o 00:02:14.372 CC lib/keyring/keyring.o 00:02:14.372 CC lib/trace/trace_rpc.o 00:02:14.372 CC lib/keyring/keyring_rpc.o 00:02:14.630 LIB libspdk_notify.a 00:02:14.630 SO libspdk_notify.so.6.0 00:02:14.630 SYMLINK libspdk_notify.so 00:02:14.630 LIB libspdk_keyring.a 00:02:14.630 SO libspdk_keyring.so.2.0 00:02:14.630 LIB libspdk_trace.a 00:02:14.630 SO libspdk_trace.so.11.0 00:02:14.630 SYMLINK libspdk_keyring.so 00:02:14.888 SYMLINK libspdk_trace.so 00:02:14.888 CC lib/sock/sock.o 00:02:14.888 CC lib/sock/sock_rpc.o 00:02:14.888 CC lib/thread/thread.o 00:02:14.888 CC lib/thread/iobuf.o 00:02:15.453 LIB libspdk_sock.a 00:02:15.453 SO libspdk_sock.so.10.0 00:02:15.453 SYMLINK libspdk_sock.so 00:02:15.713 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:15.713 CC lib/nvme/nvme_ctrlr.o 00:02:15.713 CC lib/nvme/nvme_fabric.o 00:02:15.713 CC lib/nvme/nvme_ns_cmd.o 00:02:15.713 CC lib/nvme/nvme_ns.o 00:02:15.713 CC lib/nvme/nvme_pcie_common.o 00:02:15.713 CC lib/nvme/nvme_pcie.o 00:02:15.713 CC lib/nvme/nvme_qpair.o 00:02:15.713 CC lib/nvme/nvme.o 00:02:15.713 CC lib/nvme/nvme_quirks.o 00:02:15.713 CC lib/nvme/nvme_transport.o 00:02:15.713 CC lib/nvme/nvme_discovery.o 00:02:15.713 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:15.713 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:15.713 CC lib/nvme/nvme_tcp.o 00:02:15.713 LIB libspdk_env_dpdk.a 00:02:15.713 CC lib/nvme/nvme_io_msg.o 00:02:15.713 CC lib/nvme/nvme_opal.o 00:02:15.713 CC lib/nvme/nvme_poll_group.o 00:02:15.713 CC lib/nvme/nvme_zns.o 00:02:15.713 CC lib/nvme/nvme_stubs.o 00:02:15.713 CC lib/nvme/nvme_auth.o 00:02:15.713 CC lib/nvme/nvme_cuse.o 00:02:15.713 CC lib/nvme/nvme_rdma.o 00:02:15.713 SO libspdk_env_dpdk.so.15.1 00:02:15.972 SYMLINK libspdk_env_dpdk.so 00:02:16.908 LIB libspdk_thread.a 00:02:17.166 SO libspdk_thread.so.11.0 00:02:17.166 SYMLINK libspdk_thread.so 00:02:17.166 CC lib/blob/blobstore.o 00:02:17.166 CC lib/fsdev/fsdev.o 00:02:17.166 CC lib/init/json_config.o 00:02:17.166 CC lib/virtio/virtio.o 00:02:17.166 CC lib/blob/request.o 00:02:17.166 CC lib/fsdev/fsdev_io.o 00:02:17.166 CC lib/init/subsystem.o 00:02:17.166 CC lib/blob/zeroes.o 00:02:17.166 CC lib/virtio/virtio_vhost_user.o 00:02:17.166 CC lib/fsdev/fsdev_rpc.o 00:02:17.166 CC lib/init/subsystem_rpc.o 00:02:17.166 CC lib/accel/accel.o 00:02:17.166 CC lib/blob/blob_bs_dev.o 00:02:17.166 CC lib/virtio/virtio_vfio_user.o 00:02:17.166 CC lib/accel/accel_rpc.o 00:02:17.166 CC lib/init/rpc.o 00:02:17.166 CC lib/virtio/virtio_pci.o 00:02:17.166 CC lib/accel/accel_sw.o 00:02:17.732 LIB libspdk_init.a 00:02:17.732 SO libspdk_init.so.6.0 00:02:17.732 SYMLINK libspdk_init.so 00:02:17.732 LIB libspdk_virtio.a 00:02:17.732 SO libspdk_virtio.so.7.0 00:02:17.732 CC lib/event/app.o 00:02:17.732 CC lib/event/reactor.o 00:02:17.732 CC lib/event/log_rpc.o 00:02:17.732 CC lib/event/app_rpc.o 00:02:17.732 CC lib/event/scheduler_static.o 00:02:17.732 SYMLINK libspdk_virtio.so 00:02:18.298 LIB libspdk_fsdev.a 00:02:18.298 SO libspdk_fsdev.so.2.0 00:02:18.298 SYMLINK libspdk_fsdev.so 00:02:18.298 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:18.557 LIB libspdk_event.a 00:02:18.557 SO libspdk_event.so.14.0 00:02:18.557 SYMLINK 
libspdk_event.so 00:02:18.816 LIB libspdk_nvme.a 00:02:18.816 LIB libspdk_accel.a 00:02:18.816 SO libspdk_accel.so.16.0 00:02:18.816 SO libspdk_nvme.so.15.0 00:02:18.816 SYMLINK libspdk_accel.so 00:02:19.074 CC lib/bdev/bdev.o 00:02:19.074 CC lib/bdev/bdev_rpc.o 00:02:19.074 CC lib/bdev/bdev_zone.o 00:02:19.074 CC lib/bdev/part.o 00:02:19.074 CC lib/bdev/scsi_nvme.o 00:02:19.074 SYMLINK libspdk_nvme.so 00:02:19.332 LIB libspdk_fuse_dispatcher.a 00:02:19.332 SO libspdk_fuse_dispatcher.so.1.0 00:02:19.332 SYMLINK libspdk_fuse_dispatcher.so 00:02:21.863 LIB libspdk_blob.a 00:02:21.863 SO libspdk_blob.so.12.0 00:02:21.863 SYMLINK libspdk_blob.so 00:02:21.863 CC lib/blobfs/blobfs.o 00:02:21.863 CC lib/blobfs/tree.o 00:02:21.863 CC lib/lvol/lvol.o 00:02:22.798 LIB libspdk_bdev.a 00:02:22.798 SO libspdk_bdev.so.17.0 00:02:22.798 SYMLINK libspdk_bdev.so 00:02:23.063 CC lib/ublk/ublk.o 00:02:23.063 CC lib/scsi/dev.o 00:02:23.063 CC lib/ublk/ublk_rpc.o 00:02:23.063 CC lib/nvmf/ctrlr.o 00:02:23.063 CC lib/nbd/nbd.o 00:02:23.063 CC lib/scsi/lun.o 00:02:23.063 CC lib/nvmf/ctrlr_discovery.o 00:02:23.063 CC lib/ftl/ftl_core.o 00:02:23.063 CC lib/nbd/nbd_rpc.o 00:02:23.063 CC lib/scsi/port.o 00:02:23.063 CC lib/ftl/ftl_init.o 00:02:23.063 CC lib/nvmf/ctrlr_bdev.o 00:02:23.063 CC lib/nvmf/subsystem.o 00:02:23.063 CC lib/scsi/scsi.o 00:02:23.063 CC lib/ftl/ftl_layout.o 00:02:23.063 CC lib/nvmf/nvmf.o 00:02:23.063 CC lib/scsi/scsi_bdev.o 00:02:23.063 CC lib/ftl/ftl_debug.o 00:02:23.063 CC lib/nvmf/nvmf_rpc.o 00:02:23.063 CC lib/scsi/scsi_pr.o 00:02:23.063 CC lib/nvmf/transport.o 00:02:23.063 CC lib/ftl/ftl_io.o 00:02:23.063 CC lib/scsi/scsi_rpc.o 00:02:23.063 CC lib/ftl/ftl_sb.o 00:02:23.063 CC lib/nvmf/tcp.o 00:02:23.063 CC lib/ftl/ftl_l2p.o 00:02:23.063 CC lib/nvmf/stubs.o 00:02:23.063 CC lib/scsi/task.o 00:02:23.063 CC lib/ftl/ftl_l2p_flat.o 00:02:23.063 CC lib/ftl/ftl_nv_cache.o 00:02:23.063 CC lib/ftl/ftl_band.o 00:02:23.063 CC lib/ftl/ftl_band_ops.o 00:02:23.063 CC lib/nvmf/mdns_server.o 00:02:23.063 CC lib/ftl/ftl_writer.o 00:02:23.063 CC lib/nvmf/rdma.o 00:02:23.063 CC lib/nvmf/auth.o 00:02:23.063 CC lib/ftl/ftl_rq.o 00:02:23.063 CC lib/ftl/ftl_reloc.o 00:02:23.063 CC lib/ftl/ftl_l2p_cache.o 00:02:23.063 CC lib/ftl/ftl_p2l.o 00:02:23.063 CC lib/ftl/ftl_p2l_log.o 00:02:23.063 CC lib/ftl/mngt/ftl_mngt.o 00:02:23.063 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:23.063 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:23.063 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:23.063 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:23.063 LIB libspdk_blobfs.a 00:02:23.063 SO libspdk_blobfs.so.11.0 00:02:23.321 SYMLINK libspdk_blobfs.so 00:02:23.321 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:23.321 LIB libspdk_lvol.a 00:02:23.321 SO libspdk_lvol.so.11.0 00:02:23.321 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:23.321 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:23.321 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:23.321 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:23.321 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:23.581 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:23.581 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:23.581 CC lib/ftl/utils/ftl_conf.o 00:02:23.581 SYMLINK libspdk_lvol.so 00:02:23.581 CC lib/ftl/utils/ftl_md.o 00:02:23.581 CC lib/ftl/utils/ftl_mempool.o 00:02:23.581 CC lib/ftl/utils/ftl_bitmap.o 00:02:23.581 CC lib/ftl/utils/ftl_property.o 00:02:23.581 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:23.581 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:23.581 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:23.581 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:23.581 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:02:23.581 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:23.581 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:23.842 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:23.842 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:23.842 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:23.842 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:23.842 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:23.842 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:23.842 CC lib/ftl/base/ftl_base_dev.o 00:02:23.842 CC lib/ftl/base/ftl_base_bdev.o 00:02:23.842 CC lib/ftl/ftl_trace.o 00:02:24.102 LIB libspdk_nbd.a 00:02:24.102 SO libspdk_nbd.so.7.0 00:02:24.102 SYMLINK libspdk_nbd.so 00:02:24.102 LIB libspdk_scsi.a 00:02:24.361 SO libspdk_scsi.so.9.0 00:02:24.361 LIB libspdk_ublk.a 00:02:24.361 SO libspdk_ublk.so.3.0 00:02:24.361 SYMLINK libspdk_scsi.so 00:02:24.361 SYMLINK libspdk_ublk.so 00:02:24.620 CC lib/iscsi/conn.o 00:02:24.620 CC lib/vhost/vhost.o 00:02:24.620 CC lib/iscsi/init_grp.o 00:02:24.620 CC lib/vhost/vhost_rpc.o 00:02:24.620 CC lib/iscsi/iscsi.o 00:02:24.620 CC lib/vhost/vhost_scsi.o 00:02:24.620 CC lib/iscsi/param.o 00:02:24.620 CC lib/vhost/vhost_blk.o 00:02:24.620 CC lib/iscsi/portal_grp.o 00:02:24.620 CC lib/vhost/rte_vhost_user.o 00:02:24.620 CC lib/iscsi/tgt_node.o 00:02:24.620 CC lib/iscsi/iscsi_subsystem.o 00:02:24.620 CC lib/iscsi/iscsi_rpc.o 00:02:24.620 CC lib/iscsi/task.o 00:02:24.878 LIB libspdk_ftl.a 00:02:25.138 SO libspdk_ftl.so.9.0 00:02:25.396 SYMLINK libspdk_ftl.so 00:02:25.963 LIB libspdk_vhost.a 00:02:25.963 SO libspdk_vhost.so.8.0 00:02:25.963 SYMLINK libspdk_vhost.so 00:02:26.530 LIB libspdk_iscsi.a 00:02:26.530 SO libspdk_iscsi.so.8.0 00:02:26.530 LIB libspdk_nvmf.a 00:02:26.530 SYMLINK libspdk_iscsi.so 00:02:26.530 SO libspdk_nvmf.so.20.0 00:02:26.788 SYMLINK libspdk_nvmf.so 00:02:27.047 CC module/env_dpdk/env_dpdk_rpc.o 00:02:27.305 CC module/accel/error/accel_error.o 00:02:27.305 CC module/accel/error/accel_error_rpc.o 00:02:27.305 CC module/sock/posix/posix.o 00:02:27.305 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:27.305 CC module/keyring/linux/keyring.o 00:02:27.305 CC module/accel/dsa/accel_dsa.o 00:02:27.305 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:27.305 CC module/blob/bdev/blob_bdev.o 00:02:27.305 CC module/accel/ioat/accel_ioat.o 00:02:27.305 CC module/accel/ioat/accel_ioat_rpc.o 00:02:27.305 CC module/accel/dsa/accel_dsa_rpc.o 00:02:27.305 CC module/keyring/file/keyring.o 00:02:27.305 CC module/accel/iaa/accel_iaa.o 00:02:27.305 CC module/scheduler/gscheduler/gscheduler.o 00:02:27.305 CC module/fsdev/aio/fsdev_aio.o 00:02:27.305 CC module/keyring/linux/keyring_rpc.o 00:02:27.305 CC module/keyring/file/keyring_rpc.o 00:02:27.305 CC module/accel/iaa/accel_iaa_rpc.o 00:02:27.305 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:27.305 CC module/fsdev/aio/linux_aio_mgr.o 00:02:27.305 LIB libspdk_env_dpdk_rpc.a 00:02:27.305 SO libspdk_env_dpdk_rpc.so.6.0 00:02:27.305 SYMLINK libspdk_env_dpdk_rpc.so 00:02:27.305 LIB libspdk_keyring_linux.a 00:02:27.305 LIB libspdk_scheduler_gscheduler.a 00:02:27.305 LIB libspdk_keyring_file.a 00:02:27.305 LIB libspdk_scheduler_dpdk_governor.a 00:02:27.305 SO libspdk_keyring_linux.so.1.0 00:02:27.305 SO libspdk_scheduler_gscheduler.so.4.0 00:02:27.305 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:27.305 SO libspdk_keyring_file.so.2.0 00:02:27.564 LIB libspdk_accel_ioat.a 00:02:27.564 LIB libspdk_scheduler_dynamic.a 00:02:27.564 SYMLINK libspdk_keyring_linux.so 00:02:27.564 LIB libspdk_accel_error.a 00:02:27.564 SYMLINK 
libspdk_scheduler_gscheduler.so 00:02:27.564 LIB libspdk_accel_iaa.a 00:02:27.564 SO libspdk_accel_ioat.so.6.0 00:02:27.564 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:27.564 SYMLINK libspdk_keyring_file.so 00:02:27.564 SO libspdk_scheduler_dynamic.so.4.0 00:02:27.564 SO libspdk_accel_error.so.2.0 00:02:27.564 SO libspdk_accel_iaa.so.3.0 00:02:27.564 SYMLINK libspdk_accel_ioat.so 00:02:27.564 SYMLINK libspdk_scheduler_dynamic.so 00:02:27.564 SYMLINK libspdk_accel_error.so 00:02:27.564 SYMLINK libspdk_accel_iaa.so 00:02:27.564 LIB libspdk_blob_bdev.a 00:02:27.564 LIB libspdk_accel_dsa.a 00:02:27.564 SO libspdk_blob_bdev.so.12.0 00:02:27.564 SO libspdk_accel_dsa.so.5.0 00:02:27.564 SYMLINK libspdk_blob_bdev.so 00:02:27.564 SYMLINK libspdk_accel_dsa.so 00:02:27.823 CC module/bdev/split/vbdev_split.o 00:02:27.823 CC module/bdev/gpt/gpt.o 00:02:27.823 CC module/bdev/null/bdev_null.o 00:02:27.823 CC module/bdev/split/vbdev_split_rpc.o 00:02:27.823 CC module/bdev/null/bdev_null_rpc.o 00:02:27.823 CC module/bdev/aio/bdev_aio.o 00:02:27.823 CC module/bdev/lvol/vbdev_lvol.o 00:02:27.823 CC module/bdev/raid/bdev_raid.o 00:02:27.823 CC module/bdev/raid/bdev_raid_rpc.o 00:02:27.823 CC module/bdev/malloc/bdev_malloc.o 00:02:27.823 CC module/bdev/gpt/vbdev_gpt.o 00:02:27.823 CC module/bdev/aio/bdev_aio_rpc.o 00:02:27.823 CC module/bdev/delay/vbdev_delay.o 00:02:27.823 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:27.823 CC module/bdev/error/vbdev_error.o 00:02:27.823 CC module/bdev/raid/bdev_raid_sb.o 00:02:27.823 CC module/bdev/ftl/bdev_ftl.o 00:02:27.823 CC module/bdev/passthru/vbdev_passthru.o 00:02:27.823 CC module/blobfs/bdev/blobfs_bdev.o 00:02:27.823 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:27.823 CC module/bdev/nvme/bdev_nvme.o 00:02:27.823 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:27.823 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:27.823 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:27.823 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:27.823 CC module/bdev/error/vbdev_error_rpc.o 00:02:27.823 CC module/bdev/nvme/nvme_rpc.o 00:02:27.823 CC module/bdev/nvme/bdev_mdns_client.o 00:02:27.823 CC module/bdev/iscsi/bdev_iscsi.o 00:02:27.823 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:27.823 CC module/bdev/nvme/vbdev_opal.o 00:02:27.823 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:27.823 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:27.823 CC module/bdev/raid/raid0.o 00:02:27.823 CC module/bdev/raid/raid1.o 00:02:27.823 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:27.823 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:27.823 CC module/bdev/raid/concat.o 00:02:27.823 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:27.823 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:27.823 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:27.823 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:28.396 LIB libspdk_blobfs_bdev.a 00:02:28.396 SO libspdk_blobfs_bdev.so.6.0 00:02:28.396 LIB libspdk_fsdev_aio.a 00:02:28.396 SO libspdk_fsdev_aio.so.1.0 00:02:28.396 LIB libspdk_bdev_error.a 00:02:28.396 LIB libspdk_bdev_split.a 00:02:28.396 LIB libspdk_sock_posix.a 00:02:28.396 SYMLINK libspdk_blobfs_bdev.so 00:02:28.396 SO libspdk_bdev_error.so.6.0 00:02:28.396 SO libspdk_bdev_split.so.6.0 00:02:28.396 LIB libspdk_bdev_null.a 00:02:28.396 SO libspdk_sock_posix.so.6.0 00:02:28.396 SYMLINK libspdk_fsdev_aio.so 00:02:28.397 SO libspdk_bdev_null.so.6.0 00:02:28.397 LIB libspdk_bdev_gpt.a 00:02:28.397 SO libspdk_bdev_gpt.so.6.0 00:02:28.397 SYMLINK libspdk_bdev_split.so 00:02:28.397 SYMLINK libspdk_bdev_error.so 
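(The LIB / SO / SYMLINK steps above appear to be SPDK's own Makefile output: each component is first archived into a static libspdk_<name>.a, then linked into a versioned shared object such as libspdk_log.so.7.1, and finally given an unversioned libspdk_<name>.so symlink. A minimal way to inspect those artifacts by hand is sketched below; the build/lib output directory and the use of generic binutils tools are assumptions for illustration, not something this log states.)

  # Illustrative only -- assumed output directory, standard binutils commands.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
  objdump -p libspdk_log.so | grep SONAME          # soname recorded by the SO step
  nm -D --defined-only libspdk_log.so | head       # a few symbols exported by the shared object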
00:02:28.397 LIB libspdk_bdev_passthru.a 00:02:28.397 SYMLINK libspdk_sock_posix.so 00:02:28.397 SYMLINK libspdk_bdev_null.so 00:02:28.397 LIB libspdk_bdev_aio.a 00:02:28.397 SO libspdk_bdev_passthru.so.6.0 00:02:28.655 SYMLINK libspdk_bdev_gpt.so 00:02:28.655 SO libspdk_bdev_aio.so.6.0 00:02:28.655 LIB libspdk_bdev_ftl.a 00:02:28.655 SYMLINK libspdk_bdev_passthru.so 00:02:28.655 LIB libspdk_bdev_delay.a 00:02:28.655 LIB libspdk_bdev_zone_block.a 00:02:28.655 LIB libspdk_bdev_iscsi.a 00:02:28.655 SO libspdk_bdev_ftl.so.6.0 00:02:28.655 SO libspdk_bdev_zone_block.so.6.0 00:02:28.655 SO libspdk_bdev_delay.so.6.0 00:02:28.655 SO libspdk_bdev_iscsi.so.6.0 00:02:28.655 SYMLINK libspdk_bdev_aio.so 00:02:28.655 LIB libspdk_bdev_malloc.a 00:02:28.655 SYMLINK libspdk_bdev_ftl.so 00:02:28.655 SO libspdk_bdev_malloc.so.6.0 00:02:28.655 SYMLINK libspdk_bdev_zone_block.so 00:02:28.655 SYMLINK libspdk_bdev_delay.so 00:02:28.655 SYMLINK libspdk_bdev_iscsi.so 00:02:28.655 SYMLINK libspdk_bdev_malloc.so 00:02:28.914 LIB libspdk_bdev_virtio.a 00:02:28.914 LIB libspdk_bdev_lvol.a 00:02:28.914 SO libspdk_bdev_virtio.so.6.0 00:02:28.914 SO libspdk_bdev_lvol.so.6.0 00:02:28.914 SYMLINK libspdk_bdev_virtio.so 00:02:28.914 SYMLINK libspdk_bdev_lvol.so 00:02:29.482 LIB libspdk_bdev_raid.a 00:02:29.482 SO libspdk_bdev_raid.so.6.0 00:02:29.482 SYMLINK libspdk_bdev_raid.so 00:02:31.385 LIB libspdk_bdev_nvme.a 00:02:31.385 SO libspdk_bdev_nvme.so.7.1 00:02:31.644 SYMLINK libspdk_bdev_nvme.so 00:02:31.902 CC module/event/subsystems/vmd/vmd.o 00:02:31.902 CC module/event/subsystems/keyring/keyring.o 00:02:31.902 CC module/event/subsystems/iobuf/iobuf.o 00:02:31.902 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:31.902 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:31.902 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:31.902 CC module/event/subsystems/sock/sock.o 00:02:31.902 CC module/event/subsystems/scheduler/scheduler.o 00:02:31.902 CC module/event/subsystems/fsdev/fsdev.o 00:02:32.160 LIB libspdk_event_keyring.a 00:02:32.160 LIB libspdk_event_vhost_blk.a 00:02:32.160 LIB libspdk_event_fsdev.a 00:02:32.160 LIB libspdk_event_scheduler.a 00:02:32.160 LIB libspdk_event_vmd.a 00:02:32.160 LIB libspdk_event_sock.a 00:02:32.160 SO libspdk_event_keyring.so.1.0 00:02:32.160 LIB libspdk_event_iobuf.a 00:02:32.160 SO libspdk_event_vhost_blk.so.3.0 00:02:32.160 SO libspdk_event_fsdev.so.1.0 00:02:32.160 SO libspdk_event_scheduler.so.4.0 00:02:32.160 SO libspdk_event_sock.so.5.0 00:02:32.160 SO libspdk_event_vmd.so.6.0 00:02:32.160 SO libspdk_event_iobuf.so.3.0 00:02:32.160 SYMLINK libspdk_event_keyring.so 00:02:32.160 SYMLINK libspdk_event_fsdev.so 00:02:32.160 SYMLINK libspdk_event_vhost_blk.so 00:02:32.160 SYMLINK libspdk_event_scheduler.so 00:02:32.160 SYMLINK libspdk_event_sock.so 00:02:32.160 SYMLINK libspdk_event_vmd.so 00:02:32.160 SYMLINK libspdk_event_iobuf.so 00:02:32.418 CC module/event/subsystems/accel/accel.o 00:02:32.419 LIB libspdk_event_accel.a 00:02:32.419 SO libspdk_event_accel.so.6.0 00:02:32.677 SYMLINK libspdk_event_accel.so 00:02:32.677 CC module/event/subsystems/bdev/bdev.o 00:02:32.935 LIB libspdk_event_bdev.a 00:02:32.935 SO libspdk_event_bdev.so.6.0 00:02:32.935 SYMLINK libspdk_event_bdev.so 00:02:33.193 CC module/event/subsystems/ublk/ublk.o 00:02:33.193 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:33.193 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:33.193 CC module/event/subsystems/nbd/nbd.o 00:02:33.193 CC module/event/subsystems/scsi/scsi.o 00:02:33.193 LIB libspdk_event_nbd.a 
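(Each libspdk_event_<subsystem>.a produced here is a plain static archive of the module/event/subsystems objects compiled just above it. As a rough sketch, assuming the same build/lib output directory as before, the archive contents can be listed with ar; the path and the choice of libspdk_event_bdev.a are illustrative, not taken from this log.)

  # Hypothetical spot check of one event-subsystem archive.
  ar -t build/lib/libspdk_event_bdev.a                                    # member .o files bundled into the archive
  nm --defined-only build/lib/libspdk_event_bdev.a | grep ' T ' | head    # externally visible functions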
00:02:33.193 LIB libspdk_event_ublk.a 00:02:33.450 LIB libspdk_event_scsi.a 00:02:33.450 SO libspdk_event_ublk.so.3.0 00:02:33.450 SO libspdk_event_nbd.so.6.0 00:02:33.450 SO libspdk_event_scsi.so.6.0 00:02:33.450 SYMLINK libspdk_event_ublk.so 00:02:33.450 SYMLINK libspdk_event_nbd.so 00:02:33.450 SYMLINK libspdk_event_scsi.so 00:02:33.450 LIB libspdk_event_nvmf.a 00:02:33.450 SO libspdk_event_nvmf.so.6.0 00:02:33.450 SYMLINK libspdk_event_nvmf.so 00:02:33.450 CC module/event/subsystems/iscsi/iscsi.o 00:02:33.708 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:33.708 LIB libspdk_event_vhost_scsi.a 00:02:33.708 LIB libspdk_event_iscsi.a 00:02:33.708 SO libspdk_event_vhost_scsi.so.3.0 00:02:33.708 SO libspdk_event_iscsi.so.6.0 00:02:33.708 SYMLINK libspdk_event_vhost_scsi.so 00:02:33.967 SYMLINK libspdk_event_iscsi.so 00:02:33.967 SO libspdk.so.6.0 00:02:33.967 SYMLINK libspdk.so 00:02:34.234 CXX app/trace/trace.o 00:02:34.234 TEST_HEADER include/spdk/accel.h 00:02:34.234 CC app/trace_record/trace_record.o 00:02:34.234 CC test/rpc_client/rpc_client_test.o 00:02:34.234 CC app/spdk_nvme_discover/discovery_aer.o 00:02:34.234 TEST_HEADER include/spdk/accel_module.h 00:02:34.234 CC app/spdk_top/spdk_top.o 00:02:34.234 TEST_HEADER include/spdk/barrier.h 00:02:34.234 TEST_HEADER include/spdk/assert.h 00:02:34.234 CC app/spdk_nvme_identify/identify.o 00:02:34.234 TEST_HEADER include/spdk/base64.h 00:02:34.234 TEST_HEADER include/spdk/bdev.h 00:02:34.234 TEST_HEADER include/spdk/bdev_module.h 00:02:34.234 CC app/spdk_lspci/spdk_lspci.o 00:02:34.234 CC app/spdk_nvme_perf/perf.o 00:02:34.234 TEST_HEADER include/spdk/bdev_zone.h 00:02:34.234 TEST_HEADER include/spdk/bit_array.h 00:02:34.234 TEST_HEADER include/spdk/bit_pool.h 00:02:34.234 TEST_HEADER include/spdk/blob_bdev.h 00:02:34.234 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:34.234 TEST_HEADER include/spdk/blobfs.h 00:02:34.234 TEST_HEADER include/spdk/blob.h 00:02:34.234 TEST_HEADER include/spdk/conf.h 00:02:34.234 TEST_HEADER include/spdk/config.h 00:02:34.234 TEST_HEADER include/spdk/cpuset.h 00:02:34.234 TEST_HEADER include/spdk/crc16.h 00:02:34.234 TEST_HEADER include/spdk/crc32.h 00:02:34.234 TEST_HEADER include/spdk/crc64.h 00:02:34.234 TEST_HEADER include/spdk/dif.h 00:02:34.234 TEST_HEADER include/spdk/dma.h 00:02:34.234 TEST_HEADER include/spdk/endian.h 00:02:34.234 TEST_HEADER include/spdk/env_dpdk.h 00:02:34.234 TEST_HEADER include/spdk/env.h 00:02:34.234 TEST_HEADER include/spdk/event.h 00:02:34.234 TEST_HEADER include/spdk/fd_group.h 00:02:34.234 TEST_HEADER include/spdk/fd.h 00:02:34.234 TEST_HEADER include/spdk/file.h 00:02:34.234 TEST_HEADER include/spdk/fsdev.h 00:02:34.234 TEST_HEADER include/spdk/fsdev_module.h 00:02:34.234 TEST_HEADER include/spdk/ftl.h 00:02:34.234 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:34.234 TEST_HEADER include/spdk/gpt_spec.h 00:02:34.234 TEST_HEADER include/spdk/hexlify.h 00:02:34.234 TEST_HEADER include/spdk/histogram_data.h 00:02:34.234 TEST_HEADER include/spdk/idxd.h 00:02:34.234 TEST_HEADER include/spdk/idxd_spec.h 00:02:34.234 TEST_HEADER include/spdk/init.h 00:02:34.234 TEST_HEADER include/spdk/ioat.h 00:02:34.234 TEST_HEADER include/spdk/iscsi_spec.h 00:02:34.234 TEST_HEADER include/spdk/ioat_spec.h 00:02:34.234 TEST_HEADER include/spdk/json.h 00:02:34.234 TEST_HEADER include/spdk/jsonrpc.h 00:02:34.234 TEST_HEADER include/spdk/keyring.h 00:02:34.234 TEST_HEADER include/spdk/keyring_module.h 00:02:34.234 TEST_HEADER include/spdk/likely.h 00:02:34.234 TEST_HEADER 
include/spdk/log.h 00:02:34.234 TEST_HEADER include/spdk/lvol.h 00:02:34.234 TEST_HEADER include/spdk/md5.h 00:02:34.234 TEST_HEADER include/spdk/memory.h 00:02:34.234 TEST_HEADER include/spdk/mmio.h 00:02:34.234 TEST_HEADER include/spdk/nbd.h 00:02:34.234 TEST_HEADER include/spdk/net.h 00:02:34.234 TEST_HEADER include/spdk/notify.h 00:02:34.234 TEST_HEADER include/spdk/nvme.h 00:02:34.234 TEST_HEADER include/spdk/nvme_intel.h 00:02:34.234 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:34.234 TEST_HEADER include/spdk/nvme_spec.h 00:02:34.234 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:34.234 TEST_HEADER include/spdk/nvme_zns.h 00:02:34.234 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:34.234 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:34.234 TEST_HEADER include/spdk/nvmf.h 00:02:34.234 TEST_HEADER include/spdk/nvmf_transport.h 00:02:34.234 TEST_HEADER include/spdk/nvmf_spec.h 00:02:34.234 TEST_HEADER include/spdk/opal.h 00:02:34.234 TEST_HEADER include/spdk/opal_spec.h 00:02:34.234 TEST_HEADER include/spdk/pipe.h 00:02:34.234 TEST_HEADER include/spdk/pci_ids.h 00:02:34.234 TEST_HEADER include/spdk/queue.h 00:02:34.234 TEST_HEADER include/spdk/reduce.h 00:02:34.234 TEST_HEADER include/spdk/rpc.h 00:02:34.234 TEST_HEADER include/spdk/scheduler.h 00:02:34.234 TEST_HEADER include/spdk/scsi.h 00:02:34.234 TEST_HEADER include/spdk/scsi_spec.h 00:02:34.234 TEST_HEADER include/spdk/sock.h 00:02:34.234 TEST_HEADER include/spdk/stdinc.h 00:02:34.234 TEST_HEADER include/spdk/string.h 00:02:34.234 TEST_HEADER include/spdk/thread.h 00:02:34.234 TEST_HEADER include/spdk/trace.h 00:02:34.234 TEST_HEADER include/spdk/trace_parser.h 00:02:34.234 TEST_HEADER include/spdk/tree.h 00:02:34.234 TEST_HEADER include/spdk/ublk.h 00:02:34.234 TEST_HEADER include/spdk/util.h 00:02:34.234 TEST_HEADER include/spdk/uuid.h 00:02:34.234 TEST_HEADER include/spdk/version.h 00:02:34.234 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:34.234 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:34.234 TEST_HEADER include/spdk/vhost.h 00:02:34.234 TEST_HEADER include/spdk/vmd.h 00:02:34.234 TEST_HEADER include/spdk/xor.h 00:02:34.234 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:34.234 TEST_HEADER include/spdk/zipf.h 00:02:34.234 CC app/spdk_dd/spdk_dd.o 00:02:34.234 CXX test/cpp_headers/accel.o 00:02:34.234 CXX test/cpp_headers/accel_module.o 00:02:34.234 CXX test/cpp_headers/assert.o 00:02:34.234 CXX test/cpp_headers/barrier.o 00:02:34.234 CXX test/cpp_headers/base64.o 00:02:34.234 CXX test/cpp_headers/bdev.o 00:02:34.234 CXX test/cpp_headers/bdev_module.o 00:02:34.234 CXX test/cpp_headers/bdev_zone.o 00:02:34.234 CXX test/cpp_headers/bit_array.o 00:02:34.234 CXX test/cpp_headers/bit_pool.o 00:02:34.234 CXX test/cpp_headers/blob_bdev.o 00:02:34.234 CXX test/cpp_headers/blobfs_bdev.o 00:02:34.234 CXX test/cpp_headers/blobfs.o 00:02:34.234 CXX test/cpp_headers/blob.o 00:02:34.234 CXX test/cpp_headers/conf.o 00:02:34.234 CXX test/cpp_headers/config.o 00:02:34.234 CXX test/cpp_headers/cpuset.o 00:02:34.234 CXX test/cpp_headers/crc16.o 00:02:34.234 CC app/iscsi_tgt/iscsi_tgt.o 00:02:34.234 CC app/nvmf_tgt/nvmf_main.o 00:02:34.234 CC app/spdk_tgt/spdk_tgt.o 00:02:34.234 CXX test/cpp_headers/crc32.o 00:02:34.234 CC examples/util/zipf/zipf.o 00:02:34.234 CC test/app/histogram_perf/histogram_perf.o 00:02:34.234 CC examples/ioat/verify/verify.o 00:02:34.234 CC examples/ioat/perf/perf.o 00:02:34.234 CC app/fio/nvme/fio_plugin.o 00:02:34.234 CC test/env/vtophys/vtophys.o 00:02:34.234 CC test/app/stub/stub.o 00:02:34.234 CC 
test/thread/poller_perf/poller_perf.o 00:02:34.234 CC test/env/memory/memory_ut.o 00:02:34.234 CC test/app/jsoncat/jsoncat.o 00:02:34.234 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:34.234 CC test/env/pci/pci_ut.o 00:02:34.234 CC app/fio/bdev/fio_plugin.o 00:02:34.234 CC test/dma/test_dma/test_dma.o 00:02:34.234 CC test/app/bdev_svc/bdev_svc.o 00:02:34.496 CC test/env/mem_callbacks/mem_callbacks.o 00:02:34.496 LINK spdk_lspci 00:02:34.496 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:34.496 LINK rpc_client_test 00:02:34.496 LINK spdk_nvme_discover 00:02:34.496 LINK histogram_perf 00:02:34.496 LINK jsoncat 00:02:34.763 LINK zipf 00:02:34.763 LINK vtophys 00:02:34.763 LINK poller_perf 00:02:34.763 CXX test/cpp_headers/crc64.o 00:02:34.763 LINK interrupt_tgt 00:02:34.763 CXX test/cpp_headers/dif.o 00:02:34.763 LINK nvmf_tgt 00:02:34.763 CXX test/cpp_headers/dma.o 00:02:34.763 CXX test/cpp_headers/endian.o 00:02:34.763 CXX test/cpp_headers/env_dpdk.o 00:02:34.763 CXX test/cpp_headers/env.o 00:02:34.763 LINK iscsi_tgt 00:02:34.763 CXX test/cpp_headers/event.o 00:02:34.763 LINK env_dpdk_post_init 00:02:34.763 CXX test/cpp_headers/fd_group.o 00:02:34.763 CXX test/cpp_headers/fd.o 00:02:34.763 CXX test/cpp_headers/file.o 00:02:34.763 CXX test/cpp_headers/fsdev.o 00:02:34.763 LINK spdk_tgt 00:02:34.763 CXX test/cpp_headers/fsdev_module.o 00:02:34.763 CXX test/cpp_headers/ftl.o 00:02:34.763 LINK bdev_svc 00:02:34.763 CXX test/cpp_headers/fuse_dispatcher.o 00:02:34.763 LINK stub 00:02:34.763 CXX test/cpp_headers/gpt_spec.o 00:02:34.763 CXX test/cpp_headers/hexlify.o 00:02:34.763 LINK spdk_trace_record 00:02:34.763 LINK verify 00:02:34.763 CXX test/cpp_headers/histogram_data.o 00:02:34.763 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:34.763 LINK ioat_perf 00:02:34.763 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:35.023 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:35.023 CXX test/cpp_headers/idxd.o 00:02:35.023 CXX test/cpp_headers/idxd_spec.o 00:02:35.023 CXX test/cpp_headers/init.o 00:02:35.023 CXX test/cpp_headers/ioat.o 00:02:35.023 CXX test/cpp_headers/ioat_spec.o 00:02:35.023 CXX test/cpp_headers/iscsi_spec.o 00:02:35.023 CXX test/cpp_headers/json.o 00:02:35.023 CXX test/cpp_headers/jsonrpc.o 00:02:35.023 CXX test/cpp_headers/keyring.o 00:02:35.023 CXX test/cpp_headers/keyring_module.o 00:02:35.023 LINK spdk_dd 00:02:35.023 LINK spdk_trace 00:02:35.023 CXX test/cpp_headers/log.o 00:02:35.023 CXX test/cpp_headers/lvol.o 00:02:35.023 CXX test/cpp_headers/md5.o 00:02:35.023 CXX test/cpp_headers/likely.o 00:02:35.023 CXX test/cpp_headers/memory.o 00:02:35.023 CXX test/cpp_headers/mmio.o 00:02:35.023 CXX test/cpp_headers/nbd.o 00:02:35.023 CXX test/cpp_headers/net.o 00:02:35.023 CXX test/cpp_headers/notify.o 00:02:35.023 CXX test/cpp_headers/nvme.o 00:02:35.287 CXX test/cpp_headers/nvme_intel.o 00:02:35.287 CXX test/cpp_headers/nvme_ocssd.o 00:02:35.287 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:35.287 CXX test/cpp_headers/nvme_spec.o 00:02:35.287 CXX test/cpp_headers/nvme_zns.o 00:02:35.287 CC test/event/event_perf/event_perf.o 00:02:35.287 LINK pci_ut 00:02:35.287 CC test/event/reactor/reactor.o 00:02:35.287 CXX test/cpp_headers/nvmf_cmd.o 00:02:35.287 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:35.287 CXX test/cpp_headers/nvmf.o 00:02:35.287 CC test/event/reactor_perf/reactor_perf.o 00:02:35.287 CC test/event/app_repeat/app_repeat.o 00:02:35.287 CXX test/cpp_headers/nvmf_spec.o 00:02:35.287 CXX test/cpp_headers/nvmf_transport.o 00:02:35.287 CC 
examples/sock/hello_world/hello_sock.o 00:02:35.287 CC examples/vmd/led/led.o 00:02:35.287 CC examples/vmd/lsvmd/lsvmd.o 00:02:35.287 CC examples/idxd/perf/perf.o 00:02:35.287 CC test/event/scheduler/scheduler.o 00:02:35.548 CXX test/cpp_headers/opal.o 00:02:35.548 CXX test/cpp_headers/opal_spec.o 00:02:35.548 CC examples/thread/thread/thread_ex.o 00:02:35.548 CXX test/cpp_headers/pci_ids.o 00:02:35.548 CXX test/cpp_headers/pipe.o 00:02:35.548 CXX test/cpp_headers/queue.o 00:02:35.548 CXX test/cpp_headers/reduce.o 00:02:35.548 CXX test/cpp_headers/rpc.o 00:02:35.548 LINK spdk_bdev 00:02:35.548 CXX test/cpp_headers/scheduler.o 00:02:35.548 LINK test_dma 00:02:35.548 CXX test/cpp_headers/scsi.o 00:02:35.548 LINK nvme_fuzz 00:02:35.548 CXX test/cpp_headers/scsi_spec.o 00:02:35.548 CXX test/cpp_headers/sock.o 00:02:35.548 CXX test/cpp_headers/stdinc.o 00:02:35.548 CXX test/cpp_headers/string.o 00:02:35.548 CXX test/cpp_headers/thread.o 00:02:35.548 CXX test/cpp_headers/trace.o 00:02:35.548 LINK event_perf 00:02:35.548 LINK reactor 00:02:35.548 CXX test/cpp_headers/trace_parser.o 00:02:35.548 CXX test/cpp_headers/tree.o 00:02:35.548 CXX test/cpp_headers/ublk.o 00:02:35.548 CXX test/cpp_headers/util.o 00:02:35.548 LINK mem_callbacks 00:02:35.548 LINK reactor_perf 00:02:35.809 CXX test/cpp_headers/uuid.o 00:02:35.809 LINK spdk_nvme 00:02:35.809 CXX test/cpp_headers/version.o 00:02:35.809 CXX test/cpp_headers/vfio_user_pci.o 00:02:35.809 CC app/vhost/vhost.o 00:02:35.809 LINK lsvmd 00:02:35.809 LINK app_repeat 00:02:35.809 LINK led 00:02:35.809 CXX test/cpp_headers/vfio_user_spec.o 00:02:35.809 CXX test/cpp_headers/vhost.o 00:02:35.809 CXX test/cpp_headers/vmd.o 00:02:35.809 CXX test/cpp_headers/xor.o 00:02:35.809 CXX test/cpp_headers/zipf.o 00:02:35.809 LINK vhost_fuzz 00:02:36.100 LINK scheduler 00:02:36.100 LINK thread 00:02:36.100 LINK hello_sock 00:02:36.100 LINK vhost 00:02:36.100 LINK spdk_nvme_perf 00:02:36.100 CC test/nvme/reset/reset.o 00:02:36.100 CC test/nvme/aer/aer.o 00:02:36.100 CC test/nvme/overhead/overhead.o 00:02:36.100 CC test/nvme/startup/startup.o 00:02:36.100 CC test/nvme/fused_ordering/fused_ordering.o 00:02:36.100 CC test/nvme/reserve/reserve.o 00:02:36.100 CC test/nvme/compliance/nvme_compliance.o 00:02:36.100 CC test/nvme/cuse/cuse.o 00:02:36.100 CC test/nvme/boot_partition/boot_partition.o 00:02:36.100 CC test/nvme/sgl/sgl.o 00:02:36.100 CC test/nvme/e2edp/nvme_dp.o 00:02:36.100 CC test/nvme/err_injection/err_injection.o 00:02:36.100 CC test/nvme/simple_copy/simple_copy.o 00:02:36.100 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:36.100 CC test/nvme/connect_stress/connect_stress.o 00:02:36.100 CC test/nvme/fdp/fdp.o 00:02:36.100 CC test/blobfs/mkfs/mkfs.o 00:02:36.383 CC test/accel/dif/dif.o 00:02:36.383 LINK idxd_perf 00:02:36.383 LINK spdk_top 00:02:36.383 LINK spdk_nvme_identify 00:02:36.383 CC test/lvol/esnap/esnap.o 00:02:36.383 LINK startup 00:02:36.383 LINK connect_stress 00:02:36.383 LINK doorbell_aers 00:02:36.383 CC examples/accel/perf/accel_perf.o 00:02:36.383 LINK err_injection 00:02:36.383 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:36.383 CC examples/nvme/abort/abort.o 00:02:36.383 CC examples/nvme/reconnect/reconnect.o 00:02:36.383 CC examples/blob/cli/blobcli.o 00:02:36.383 LINK reserve 00:02:36.383 CC examples/blob/hello_world/hello_blob.o 00:02:36.383 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:36.383 LINK boot_partition 00:02:36.383 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:36.383 CC examples/nvme/hotplug/hotplug.o 00:02:36.383 
CC examples/nvme/nvme_manage/nvme_manage.o 00:02:36.383 CC examples/nvme/hello_world/hello_world.o 00:02:36.383 CC examples/nvme/arbitration/arbitration.o 00:02:36.647 LINK fused_ordering 00:02:36.647 LINK mkfs 00:02:36.647 LINK overhead 00:02:36.647 LINK sgl 00:02:36.647 LINK nvme_dp 00:02:36.647 LINK reset 00:02:36.647 LINK simple_copy 00:02:36.647 LINK nvme_compliance 00:02:36.647 LINK aer 00:02:36.647 LINK fdp 00:02:36.905 LINK hotplug 00:02:36.905 LINK cmb_copy 00:02:36.905 LINK hello_world 00:02:36.905 LINK pmr_persistence 00:02:36.905 LINK memory_ut 00:02:36.905 LINK hello_blob 00:02:36.905 LINK arbitration 00:02:36.905 LINK hello_fsdev 00:02:37.163 LINK abort 00:02:37.163 LINK reconnect 00:02:37.163 LINK nvme_manage 00:02:37.163 LINK blobcli 00:02:37.421 LINK accel_perf 00:02:37.421 LINK dif 00:02:37.680 CC examples/bdev/hello_world/hello_bdev.o 00:02:37.680 CC examples/bdev/bdevperf/bdevperf.o 00:02:37.680 CC test/bdev/bdevio/bdevio.o 00:02:37.937 LINK iscsi_fuzz 00:02:37.937 LINK hello_bdev 00:02:38.195 LINK cuse 00:02:38.195 LINK bdevio 00:02:38.762 LINK bdevperf 00:02:39.328 CC examples/nvmf/nvmf/nvmf.o 00:02:39.586 LINK nvmf 00:02:43.775 LINK esnap 00:02:44.034 00:02:44.034 real 1m20.993s 00:02:44.034 user 13m9.913s 00:02:44.034 sys 2m32.841s 00:02:44.034 18:04:55 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:44.034 18:04:55 make -- common/autotest_common.sh@10 -- $ set +x 00:02:44.034 ************************************ 00:02:44.034 END TEST make 00:02:44.034 ************************************ 00:02:44.034 18:04:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:44.034 18:04:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:44.034 18:04:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:44.034 18:04:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.034 18:04:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:44.034 18:04:55 -- pm/common@44 -- $ pid=2249975 00:02:44.034 18:04:55 -- pm/common@50 -- $ kill -TERM 2249975 00:02:44.034 18:04:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.034 18:04:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:44.034 18:04:55 -- pm/common@44 -- $ pid=2249977 00:02:44.034 18:04:55 -- pm/common@50 -- $ kill -TERM 2249977 00:02:44.034 18:04:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.035 18:04:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:44.035 18:04:55 -- pm/common@44 -- $ pid=2249979 00:02:44.035 18:04:55 -- pm/common@50 -- $ kill -TERM 2249979 00:02:44.035 18:04:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.035 18:04:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:44.035 18:04:55 -- pm/common@44 -- $ pid=2250009 00:02:44.035 18:04:55 -- pm/common@50 -- $ sudo -E kill -TERM 2250009 00:02:44.035 18:04:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:44.035 18:04:55 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:44.035 18:04:55 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:44.035 18:04:55 -- 
common/autotest_common.sh@1693 -- # lcov --version 00:02:44.035 18:04:55 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:44.035 18:04:56 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:44.035 18:04:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:44.035 18:04:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:44.035 18:04:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:44.035 18:04:56 -- scripts/common.sh@336 -- # IFS=.-: 00:02:44.035 18:04:56 -- scripts/common.sh@336 -- # read -ra ver1 00:02:44.035 18:04:56 -- scripts/common.sh@337 -- # IFS=.-: 00:02:44.035 18:04:56 -- scripts/common.sh@337 -- # read -ra ver2 00:02:44.035 18:04:56 -- scripts/common.sh@338 -- # local 'op=<' 00:02:44.035 18:04:56 -- scripts/common.sh@340 -- # ver1_l=2 00:02:44.035 18:04:56 -- scripts/common.sh@341 -- # ver2_l=1 00:02:44.035 18:04:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:44.035 18:04:56 -- scripts/common.sh@344 -- # case "$op" in 00:02:44.035 18:04:56 -- scripts/common.sh@345 -- # : 1 00:02:44.035 18:04:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:44.035 18:04:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:44.035 18:04:56 -- scripts/common.sh@365 -- # decimal 1 00:02:44.035 18:04:56 -- scripts/common.sh@353 -- # local d=1 00:02:44.035 18:04:56 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:44.035 18:04:56 -- scripts/common.sh@355 -- # echo 1 00:02:44.035 18:04:56 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:44.035 18:04:56 -- scripts/common.sh@366 -- # decimal 2 00:02:44.035 18:04:56 -- scripts/common.sh@353 -- # local d=2 00:02:44.035 18:04:56 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:44.035 18:04:56 -- scripts/common.sh@355 -- # echo 2 00:02:44.035 18:04:56 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:44.035 18:04:56 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:44.035 18:04:56 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:44.035 18:04:56 -- scripts/common.sh@368 -- # return 0 00:02:44.035 18:04:56 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:44.035 18:04:56 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.035 --rc genhtml_branch_coverage=1 00:02:44.035 --rc genhtml_function_coverage=1 00:02:44.035 --rc genhtml_legend=1 00:02:44.035 --rc geninfo_all_blocks=1 00:02:44.035 --rc geninfo_unexecuted_blocks=1 00:02:44.035 00:02:44.035 ' 00:02:44.035 18:04:56 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.035 --rc genhtml_branch_coverage=1 00:02:44.035 --rc genhtml_function_coverage=1 00:02:44.035 --rc genhtml_legend=1 00:02:44.035 --rc geninfo_all_blocks=1 00:02:44.035 --rc geninfo_unexecuted_blocks=1 00:02:44.035 00:02:44.035 ' 00:02:44.035 18:04:56 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.035 --rc genhtml_branch_coverage=1 00:02:44.035 --rc genhtml_function_coverage=1 00:02:44.035 --rc genhtml_legend=1 00:02:44.035 --rc geninfo_all_blocks=1 00:02:44.035 --rc geninfo_unexecuted_blocks=1 00:02:44.035 00:02:44.035 ' 00:02:44.035 18:04:56 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:44.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.035 --rc genhtml_branch_coverage=1 00:02:44.035 
--rc genhtml_function_coverage=1 00:02:44.035 --rc genhtml_legend=1 00:02:44.035 --rc geninfo_all_blocks=1 00:02:44.035 --rc geninfo_unexecuted_blocks=1 00:02:44.035 00:02:44.035 ' 00:02:44.035 18:04:56 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:44.035 18:04:56 -- nvmf/common.sh@7 -- # uname -s 00:02:44.035 18:04:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:44.035 18:04:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:44.035 18:04:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:44.035 18:04:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:44.035 18:04:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:44.035 18:04:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:44.035 18:04:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:44.035 18:04:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:44.035 18:04:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:44.035 18:04:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:44.035 18:04:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:44.035 18:04:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:44.035 18:04:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:44.035 18:04:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:44.035 18:04:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:44.035 18:04:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:44.035 18:04:56 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:44.035 18:04:56 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:44.035 18:04:56 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:44.035 18:04:56 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:44.035 18:04:56 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:44.035 18:04:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.035 18:04:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.035 18:04:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.035 18:04:56 -- paths/export.sh@5 -- # export PATH 00:02:44.035 18:04:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.035 18:04:56 -- nvmf/common.sh@51 -- # : 0 00:02:44.035 18:04:56 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:44.035 
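(The xtrace lines above walk through scripts/common.sh comparing the detected lcov version, 1.15, against 2 so that autotest can export matching LCOV_OPTS/LCOV values. A simplified stand-in for that dotted-version comparison is sketched below; version_lt is a name invented here for illustration and is not the helper actually defined in scripts/common.sh.)

  # Minimal sketch of a dotted-version "less than" test, in the spirit of the cmp_versions trace above.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}
          ((x < y)) && return 0      # an earlier component already decides the comparison
          ((x > y)) && return 1
      done
      return 1                       # equal versions are not "less than"
  }

  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      # lcov 1.x takes the --rc lcov_* switches seen in the exported LCOV_OPTS above
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi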
18:04:56 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:44.035 18:04:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:44.035 18:04:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:44.035 18:04:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:44.035 18:04:56 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:44.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:44.035 18:04:56 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:44.035 18:04:56 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:44.035 18:04:56 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:44.035 18:04:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:44.036 18:04:56 -- spdk/autotest.sh@32 -- # uname -s 00:02:44.036 18:04:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:44.036 18:04:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:44.036 18:04:56 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:44.036 18:04:56 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:44.036 18:04:56 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:44.036 18:04:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:44.036 18:04:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:44.036 18:04:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:44.036 18:04:56 -- spdk/autotest.sh@48 -- # udevadm_pid=2310174 00:02:44.036 18:04:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:44.036 18:04:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:44.036 18:04:56 -- pm/common@17 -- # local monitor 00:02:44.036 18:04:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.036 18:04:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.036 18:04:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.036 18:04:56 -- pm/common@21 -- # date +%s 00:02:44.036 18:04:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.036 18:04:56 -- pm/common@21 -- # date +%s 00:02:44.036 18:04:56 -- pm/common@25 -- # sleep 1 00:02:44.036 18:04:56 -- pm/common@21 -- # date +%s 00:02:44.036 18:04:56 -- pm/common@21 -- # date +%s 00:02:44.036 18:04:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732813496 00:02:44.036 18:04:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732813496 00:02:44.036 18:04:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732813496 00:02:44.036 18:04:56 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732813496 00:02:44.036 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732813496_collect-cpu-load.pm.log 00:02:44.036 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732813496_collect-vmstat.pm.log 00:02:44.036 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732813496_collect-cpu-temp.pm.log 00:02:44.036 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732813496_collect-bmc-pm.bmc.pm.log 00:02:45.414 18:04:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:45.414 18:04:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:45.414 18:04:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:45.414 18:04:57 -- common/autotest_common.sh@10 -- # set +x 00:02:45.414 18:04:57 -- spdk/autotest.sh@59 -- # create_test_list 00:02:45.414 18:04:57 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:45.414 18:04:57 -- common/autotest_common.sh@10 -- # set +x 00:02:45.414 18:04:57 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:45.414 18:04:57 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:45.414 18:04:57 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:45.414 18:04:57 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:45.414 18:04:57 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:45.414 18:04:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:45.414 18:04:57 -- common/autotest_common.sh@1457 -- # uname 00:02:45.414 18:04:57 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:45.414 18:04:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:45.414 18:04:57 -- common/autotest_common.sh@1477 -- # uname 00:02:45.414 18:04:57 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:45.414 18:04:57 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:45.414 18:04:57 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:45.414 lcov: LCOV version 1.15 00:02:45.415 18:04:57 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:07.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:07.334 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:22.241 18:05:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:22.241 18:05:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:22.241 18:05:33 -- common/autotest_common.sh@10 -- # set +x 00:03:22.241 18:05:33 -- spdk/autotest.sh@78 -- # rm -f 00:03:22.241 18:05:33 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.808 
0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:22.808 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:22.808 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:22.808 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:22.808 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:22.808 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:22.808 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:22.808 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:23.067 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:23.067 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:23.067 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:23.067 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:23.067 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:23.067 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:23.067 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:23.067 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:23.067 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:23.067 18:05:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:23.067 18:05:35 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:23.067 18:05:35 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:23.067 18:05:35 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:23.067 18:05:35 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:23.067 18:05:35 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:23.067 18:05:35 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:23.067 18:05:35 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:23.067 18:05:35 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:23.067 18:05:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:23.067 18:05:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:23.067 18:05:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:23.067 18:05:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:23.067 18:05:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:23.067 18:05:35 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:23.325 No valid GPT data, bailing 00:03:23.325 18:05:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:23.325 18:05:35 -- scripts/common.sh@394 -- # pt= 00:03:23.325 18:05:35 -- scripts/common.sh@395 -- # return 1 00:03:23.325 18:05:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:23.325 1+0 records in 00:03:23.325 1+0 records out 00:03:23.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00212725 s, 493 MB/s 00:03:23.325 18:05:35 -- spdk/autotest.sh@105 -- # sync 00:03:23.325 18:05:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:23.325 18:05:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:23.325 18:05:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:25.857 18:05:37 -- spdk/autotest.sh@111 -- # uname -s 00:03:25.857 18:05:37 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:25.857 18:05:37 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:25.857 18:05:37 -- spdk/autotest.sh@115 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:26.792 Hugepages 00:03:26.792 node hugesize free / total 00:03:26.792 node0 1048576kB 0 / 0 00:03:26.792 node0 2048kB 0 / 0 00:03:26.792 node1 1048576kB 0 / 0 00:03:26.792 node1 2048kB 0 / 0 00:03:26.792 00:03:26.792 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:26.792 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:26.792 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:26.792 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:26.792 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:26.792 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:26.792 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:26.792 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:26.792 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:26.792 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:26.792 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:26.792 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:26.792 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:26.792 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:26.792 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:26.792 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:26.792 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:26.792 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:26.792 18:05:38 -- spdk/autotest.sh@117 -- # uname -s 00:03:26.792 18:05:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:26.792 18:05:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:26.792 18:05:38 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.168 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:28.168 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:28.168 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:28.168 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:28.168 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:28.168 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:28.168 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:28.168 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:28.168 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:28.168 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:28.168 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:28.168 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:28.168 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:28.168 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:28.168 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:28.168 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:29.104 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:29.104 18:05:41 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:30.040 18:05:42 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:30.040 18:05:42 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:30.040 18:05:42 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:30.040 18:05:42 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:30.040 18:05:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:30.040 18:05:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:30.040 18:05:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:30.040 18:05:42 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:30.040 18:05:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:30.040 18:05:42 -- 
common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:30.040 18:05:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:30.040 18:05:42 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.973 Waiting for block devices as requested 00:03:31.232 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:31.232 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:31.490 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:31.490 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:31.490 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:31.490 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:31.748 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:31.748 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:31.748 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:31.748 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:32.006 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:32.006 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:32.006 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:32.006 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:32.263 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:32.263 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:32.263 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:32.522 18:05:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:32.522 18:05:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:32.522 18:05:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:32.522 18:05:44 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:03:32.522 18:05:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:32.522 18:05:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:32.522 18:05:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:32.522 18:05:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:32.522 18:05:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:32.522 18:05:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:32.522 18:05:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:32.522 18:05:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:32.522 18:05:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:32.522 18:05:44 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:32.522 18:05:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:32.522 18:05:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:32.522 18:05:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:32.522 18:05:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:32.522 18:05:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:32.522 18:05:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:32.522 18:05:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:32.522 18:05:44 -- common/autotest_common.sh@1543 -- # continue 00:03:32.522 18:05:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:32.522 18:05:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:32.522 18:05:44 -- common/autotest_common.sh@10 -- # set +x 00:03:32.522 18:05:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:32.522 18:05:44 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:03:32.522 18:05:44 -- common/autotest_common.sh@10 -- # set +x 00:03:32.522 18:05:44 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.917 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:33.917 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:33.917 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:33.917 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:33.917 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:33.917 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:33.917 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:33.917 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:33.917 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:33.917 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:33.917 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:33.917 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:33.917 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:33.917 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:33.917 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:33.917 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:34.493 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:34.752 18:05:46 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:34.752 18:05:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:34.752 18:05:46 -- common/autotest_common.sh@10 -- # set +x 00:03:34.752 18:05:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:34.752 18:05:46 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:34.752 18:05:46 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:34.752 18:05:46 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:34.752 18:05:46 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:34.752 18:05:46 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:34.752 18:05:46 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:34.752 18:05:46 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:34.752 18:05:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:34.752 18:05:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:34.752 18:05:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.752 18:05:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.752 18:05:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:34.752 18:05:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:34.752 18:05:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:34.752 18:05:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:34.752 18:05:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:35.011 18:05:46 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:35.011 18:05:46 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:35.011 18:05:46 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:35.011 18:05:46 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:35.011 18:05:46 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:03:35.011 18:05:46 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:03:35.011 18:05:46 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2320566 00:03:35.011 18:05:46 -- common/autotest_common.sh@1583 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.011 18:05:46 -- common/autotest_common.sh@1585 -- # waitforlisten 2320566 00:03:35.011 18:05:46 -- common/autotest_common.sh@835 -- # '[' -z 2320566 ']' 00:03:35.011 18:05:46 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:35.011 18:05:46 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:35.011 18:05:46 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:35.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:35.011 18:05:46 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:35.011 18:05:46 -- common/autotest_common.sh@10 -- # set +x 00:03:35.011 [2024-11-28 18:05:46.984610] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:03:35.011 [2024-11-28 18:05:46.984779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320566 ] 00:03:35.270 [2024-11-28 18:05:47.129677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.270 [2024-11-28 18:05:47.269492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.204 18:05:48 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:36.204 18:05:48 -- common/autotest_common.sh@868 -- # return 0 00:03:36.204 18:05:48 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:36.204 18:05:48 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:36.204 18:05:48 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:39.486 nvme0n1 00:03:39.486 18:05:51 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:39.745 [2024-11-28 18:05:51.646032] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:39.745 [2024-11-28 18:05:51.646121] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:39.745 request: 00:03:39.745 { 00:03:39.745 "nvme_ctrlr_name": "nvme0", 00:03:39.745 "password": "test", 00:03:39.745 "method": "bdev_nvme_opal_revert", 00:03:39.745 "req_id": 1 00:03:39.745 } 00:03:39.745 Got JSON-RPC error response 00:03:39.745 response: 00:03:39.745 { 00:03:39.745 "code": -32603, 00:03:39.745 "message": "Internal error" 00:03:39.745 } 00:03:39.745 18:05:51 -- common/autotest_common.sh@1591 -- # true 00:03:39.745 18:05:51 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:39.745 18:05:51 -- common/autotest_common.sh@1595 -- # killprocess 2320566 00:03:39.745 18:05:51 -- common/autotest_common.sh@954 -- # '[' -z 2320566 ']' 00:03:39.745 18:05:51 -- common/autotest_common.sh@958 -- # kill -0 2320566 00:03:39.745 18:05:51 -- common/autotest_common.sh@959 -- # uname 00:03:39.745 18:05:51 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:39.745 18:05:51 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2320566 00:03:39.745 18:05:51 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:39.745 18:05:51 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:39.745 18:05:51 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2320566' 00:03:39.745 killing process with pid 2320566 00:03:39.745 18:05:51 -- common/autotest_common.sh@973 -- # kill 2320566 00:03:39.745 18:05:51 -- common/autotest_common.sh@978 -- # wait 2320566 00:03:43.027 18:05:55 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:43.027 18:05:55 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:43.027 18:05:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:43.027 18:05:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:43.027 18:05:55 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:43.027 18:05:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.027 18:05:55 -- common/autotest_common.sh@10 -- # set +x 00:03:43.284 18:05:55 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:43.284 18:05:55 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:43.284 18:05:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.284 18:05:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.284 18:05:55 -- common/autotest_common.sh@10 -- # set +x 00:03:43.284 ************************************ 00:03:43.284 START TEST env 00:03:43.284 ************************************ 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:43.284 * Looking for test storage... 00:03:43.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:43.284 18:05:55 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.284 18:05:55 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.284 18:05:55 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.284 18:05:55 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.284 18:05:55 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.284 18:05:55 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.284 18:05:55 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.284 18:05:55 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.284 18:05:55 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.284 18:05:55 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.284 18:05:55 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.284 18:05:55 env -- scripts/common.sh@344 -- # case "$op" in 00:03:43.284 18:05:55 env -- scripts/common.sh@345 -- # : 1 00:03:43.284 18:05:55 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.284 18:05:55 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:43.284 18:05:55 env -- scripts/common.sh@365 -- # decimal 1 00:03:43.284 18:05:55 env -- scripts/common.sh@353 -- # local d=1 00:03:43.284 18:05:55 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.284 18:05:55 env -- scripts/common.sh@355 -- # echo 1 00:03:43.284 18:05:55 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.284 18:05:55 env -- scripts/common.sh@366 -- # decimal 2 00:03:43.284 18:05:55 env -- scripts/common.sh@353 -- # local d=2 00:03:43.284 18:05:55 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.284 18:05:55 env -- scripts/common.sh@355 -- # echo 2 00:03:43.284 18:05:55 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.284 18:05:55 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.284 18:05:55 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.284 18:05:55 env -- scripts/common.sh@368 -- # return 0 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:43.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.284 --rc genhtml_branch_coverage=1 00:03:43.284 --rc genhtml_function_coverage=1 00:03:43.284 --rc genhtml_legend=1 00:03:43.284 --rc geninfo_all_blocks=1 00:03:43.284 --rc geninfo_unexecuted_blocks=1 00:03:43.284 00:03:43.284 ' 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:43.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.284 --rc genhtml_branch_coverage=1 00:03:43.284 --rc genhtml_function_coverage=1 00:03:43.284 --rc genhtml_legend=1 00:03:43.284 --rc geninfo_all_blocks=1 00:03:43.284 --rc geninfo_unexecuted_blocks=1 00:03:43.284 00:03:43.284 ' 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:43.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.284 --rc genhtml_branch_coverage=1 00:03:43.284 --rc genhtml_function_coverage=1 00:03:43.284 --rc genhtml_legend=1 00:03:43.284 --rc geninfo_all_blocks=1 00:03:43.284 --rc geninfo_unexecuted_blocks=1 00:03:43.284 00:03:43.284 ' 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:43.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.284 --rc genhtml_branch_coverage=1 00:03:43.284 --rc genhtml_function_coverage=1 00:03:43.284 --rc genhtml_legend=1 00:03:43.284 --rc geninfo_all_blocks=1 00:03:43.284 --rc geninfo_unexecuted_blocks=1 00:03:43.284 00:03:43.284 ' 00:03:43.284 18:05:55 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.284 18:05:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.284 18:05:55 env -- common/autotest_common.sh@10 -- # set +x 00:03:43.284 ************************************ 00:03:43.284 START TEST env_memory 00:03:43.284 ************************************ 00:03:43.284 18:05:55 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:43.284 00:03:43.284 00:03:43.284 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.284 http://cunit.sourceforge.net/ 00:03:43.284 00:03:43.284 00:03:43.284 Suite: memory 00:03:43.284 Test: alloc and free memory map ...[2024-11-28 18:05:55.373445] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:43.542 passed 00:03:43.542 Test: mem map translation ...[2024-11-28 18:05:55.417204] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:43.542 [2024-11-28 18:05:55.417251] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:43.542 [2024-11-28 18:05:55.417354] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:43.542 [2024-11-28 18:05:55.417404] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:43.542 passed 00:03:43.542 Test: mem map registration ...[2024-11-28 18:05:55.487016] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:43.542 [2024-11-28 18:05:55.487086] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:43.542 passed 00:03:43.542 Test: mem map adjacent registrations ...passed 00:03:43.542 00:03:43.542 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.542 suites 1 1 n/a 0 0 00:03:43.542 tests 4 4 4 0 0 00:03:43.542 asserts 152 152 152 0 n/a 00:03:43.542 00:03:43.542 Elapsed time = 0.237 seconds 00:03:43.542 00:03:43.542 real 0m0.257s 00:03:43.542 user 0m0.244s 00:03:43.542 sys 0m0.013s 00:03:43.542 18:05:55 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.542 18:05:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:43.542 ************************************ 00:03:43.542 END TEST env_memory 00:03:43.542 ************************************ 00:03:43.542 18:05:55 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:43.542 18:05:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.542 18:05:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.542 18:05:55 env -- common/autotest_common.sh@10 -- # set +x 00:03:43.542 ************************************ 00:03:43.543 START TEST env_vtophys 00:03:43.543 ************************************ 00:03:43.543 18:05:55 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:43.801 EAL: lib.eal log level changed from notice to debug 00:03:43.801 EAL: Detected lcore 0 as core 0 on socket 0 00:03:43.801 EAL: Detected lcore 1 as core 1 on socket 0 00:03:43.801 EAL: Detected lcore 2 as core 2 on socket 0 00:03:43.801 EAL: Detected lcore 3 as core 3 on socket 0 00:03:43.801 EAL: Detected lcore 4 as core 4 on socket 0 00:03:43.801 EAL: Detected lcore 5 as core 5 on socket 0 00:03:43.801 EAL: Detected lcore 6 as core 8 on socket 0 00:03:43.801 EAL: Detected lcore 7 as core 9 on socket 0 00:03:43.801 EAL: Detected lcore 8 as core 10 on socket 0 00:03:43.801 EAL: Detected lcore 9 as core 11 on socket 0 00:03:43.801 EAL: Detected lcore 10 
as core 12 on socket 0 00:03:43.801 EAL: Detected lcore 11 as core 13 on socket 0 00:03:43.801 EAL: Detected lcore 12 as core 0 on socket 1 00:03:43.801 EAL: Detected lcore 13 as core 1 on socket 1 00:03:43.801 EAL: Detected lcore 14 as core 2 on socket 1 00:03:43.801 EAL: Detected lcore 15 as core 3 on socket 1 00:03:43.801 EAL: Detected lcore 16 as core 4 on socket 1 00:03:43.801 EAL: Detected lcore 17 as core 5 on socket 1 00:03:43.801 EAL: Detected lcore 18 as core 8 on socket 1 00:03:43.801 EAL: Detected lcore 19 as core 9 on socket 1 00:03:43.801 EAL: Detected lcore 20 as core 10 on socket 1 00:03:43.801 EAL: Detected lcore 21 as core 11 on socket 1 00:03:43.801 EAL: Detected lcore 22 as core 12 on socket 1 00:03:43.801 EAL: Detected lcore 23 as core 13 on socket 1 00:03:43.801 EAL: Detected lcore 24 as core 0 on socket 0 00:03:43.801 EAL: Detected lcore 25 as core 1 on socket 0 00:03:43.801 EAL: Detected lcore 26 as core 2 on socket 0 00:03:43.801 EAL: Detected lcore 27 as core 3 on socket 0 00:03:43.801 EAL: Detected lcore 28 as core 4 on socket 0 00:03:43.801 EAL: Detected lcore 29 as core 5 on socket 0 00:03:43.801 EAL: Detected lcore 30 as core 8 on socket 0 00:03:43.801 EAL: Detected lcore 31 as core 9 on socket 0 00:03:43.801 EAL: Detected lcore 32 as core 10 on socket 0 00:03:43.801 EAL: Detected lcore 33 as core 11 on socket 0 00:03:43.801 EAL: Detected lcore 34 as core 12 on socket 0 00:03:43.801 EAL: Detected lcore 35 as core 13 on socket 0 00:03:43.801 EAL: Detected lcore 36 as core 0 on socket 1 00:03:43.801 EAL: Detected lcore 37 as core 1 on socket 1 00:03:43.801 EAL: Detected lcore 38 as core 2 on socket 1 00:03:43.801 EAL: Detected lcore 39 as core 3 on socket 1 00:03:43.801 EAL: Detected lcore 40 as core 4 on socket 1 00:03:43.801 EAL: Detected lcore 41 as core 5 on socket 1 00:03:43.801 EAL: Detected lcore 42 as core 8 on socket 1 00:03:43.801 EAL: Detected lcore 43 as core 9 on socket 1 00:03:43.801 EAL: Detected lcore 44 as core 10 on socket 1 00:03:43.801 EAL: Detected lcore 45 as core 11 on socket 1 00:03:43.801 EAL: Detected lcore 46 as core 12 on socket 1 00:03:43.801 EAL: Detected lcore 47 as core 13 on socket 1 00:03:43.801 EAL: Maximum logical cores by configuration: 128 00:03:43.801 EAL: Detected CPU lcores: 48 00:03:43.801 EAL: Detected NUMA nodes: 2 00:03:43.801 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:43.801 EAL: Detected shared linkage of DPDK 00:03:43.801 EAL: No shared files mode enabled, IPC will be disabled 00:03:43.801 EAL: Bus pci wants IOVA as 'DC' 00:03:43.801 EAL: Buses did not request a specific IOVA mode. 00:03:43.801 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:43.801 EAL: Selected IOVA mode 'VA' 00:03:43.801 EAL: Probing VFIO support... 00:03:43.801 EAL: IOMMU type 1 (Type 1) is supported 00:03:43.801 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:43.801 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:43.801 EAL: VFIO support initialized 00:03:43.801 EAL: Ask a virtual area of 0x2e000 bytes 00:03:43.801 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:43.801 EAL: Setting up physically contiguous memory... 
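The lcore dump above enumerates 48 logical cores: each socket exposes physical cores 0-5 and 8-13 (12 cores), and every core appears twice (lcore 0 and lcore 24 both map to core 0 on socket 0), so 2 sockets x 12 cores x 2 hyperthreads = 48, matching "Detected CPU lcores: 48" and "Detected NUMA nodes: 2". The same topology can be cross-checked outside of EAL with standard util-linux tooling (not part of this run):

    lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core|NUMA node\(s\))'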
00:03:43.801 EAL: Setting maximum number of open files to 524288 00:03:43.801 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:43.801 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:43.801 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:43.801 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.801 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:43.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.801 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.801 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:43.801 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:43.801 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.801 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:43.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.801 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.801 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:43.801 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:43.801 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.801 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:43.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.801 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.801 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:43.801 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:43.801 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.801 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:43.801 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:43.801 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.801 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:43.801 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:43.801 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:43.801 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.801 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:43.801 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.801 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.801 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:43.801 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:43.801 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.801 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:43.801 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.801 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.801 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:43.801 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:43.801 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.801 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:43.801 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.801 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.801 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:43.801 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:43.801 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.801 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:43.801 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.801 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.801 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:43.801 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:43.801 EAL: Hugepages will be freed exactly as allocated. 00:03:43.801 EAL: No shared files mode enabled, IPC is disabled 00:03:43.801 EAL: No shared files mode enabled, IPC is disabled 00:03:43.801 EAL: TSC frequency is ~2700000 KHz 00:03:43.801 EAL: Main lcore 0 is ready (tid=7f06402fda40;cpuset=[0]) 00:03:43.801 EAL: Trying to obtain current memory policy. 00:03:43.801 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.801 EAL: Restoring previous memory policy: 0 00:03:43.801 EAL: request: mp_malloc_sync 00:03:43.801 EAL: No shared files mode enabled, IPC is disabled 00:03:43.801 EAL: Heap on socket 0 was expanded by 2MB 00:03:43.801 EAL: No shared files mode enabled, IPC is disabled 00:03:43.801 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:43.801 EAL: Mem event callback 'spdk:(nil)' registered 00:03:43.801 00:03:43.801 00:03:43.801 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.801 http://cunit.sourceforge.net/ 00:03:43.801 00:03:43.801 00:03:43.801 Suite: components_suite 00:03:44.367 Test: vtophys_malloc_test ...passed 00:03:44.367 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:44.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.367 EAL: Restoring previous memory policy: 4 00:03:44.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.367 EAL: request: mp_malloc_sync 00:03:44.367 EAL: No shared files mode enabled, IPC is disabled 00:03:44.367 EAL: Heap on socket 0 was expanded by 4MB 00:03:44.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.367 EAL: request: mp_malloc_sync 00:03:44.367 EAL: No shared files mode enabled, IPC is disabled 00:03:44.367 EAL: Heap on socket 0 was shrunk by 4MB 00:03:44.367 EAL: Trying to obtain current memory policy. 00:03:44.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.367 EAL: Restoring previous memory policy: 4 00:03:44.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.367 EAL: request: mp_malloc_sync 00:03:44.367 EAL: No shared files mode enabled, IPC is disabled 00:03:44.367 EAL: Heap on socket 0 was expanded by 6MB 00:03:44.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.367 EAL: request: mp_malloc_sync 00:03:44.367 EAL: No shared files mode enabled, IPC is disabled 00:03:44.367 EAL: Heap on socket 0 was shrunk by 6MB 00:03:44.367 EAL: Trying to obtain current memory policy. 00:03:44.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.367 EAL: Restoring previous memory policy: 4 00:03:44.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.367 EAL: request: mp_malloc_sync 00:03:44.367 EAL: No shared files mode enabled, IPC is disabled 00:03:44.367 EAL: Heap on socket 0 was expanded by 10MB 00:03:44.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.367 EAL: request: mp_malloc_sync 00:03:44.367 EAL: No shared files mode enabled, IPC is disabled 00:03:44.367 EAL: Heap on socket 0 was shrunk by 10MB 00:03:44.367 EAL: Trying to obtain current memory policy. 
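The "Creating 4 segment lists" entries above reserve virtual address space only: each list holds n_segs:8192 segments of hugepage_sz:2097152 bytes, so every reservation is 8192 x 2 MiB = 16 GiB, which is exactly the 0x400000000 sizes printed for each memseg list; with 4 lists per socket across 2 NUMA nodes that is 8 x 16 GiB = 128 GiB of VA set aside before any hugepages are actually backed ("Hugepages will be freed exactly as allocated"). A quick check of the arithmetic:

    echo $(( 8192 * 2097152 ))            # 17179869184 bytes = 16 GiB
    printf '0x%x\n' $(( 8192 * 2097152 )) # 0x400000000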
00:03:44.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.367 EAL: Restoring previous memory policy: 4 00:03:44.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.367 EAL: request: mp_malloc_sync 00:03:44.367 EAL: No shared files mode enabled, IPC is disabled 00:03:44.367 EAL: Heap on socket 0 was expanded by 18MB 00:03:44.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.367 EAL: request: mp_malloc_sync 00:03:44.367 EAL: No shared files mode enabled, IPC is disabled 00:03:44.367 EAL: Heap on socket 0 was shrunk by 18MB 00:03:44.367 EAL: Trying to obtain current memory policy. 00:03:44.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.367 EAL: Restoring previous memory policy: 4 00:03:44.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.367 EAL: request: mp_malloc_sync 00:03:44.367 EAL: No shared files mode enabled, IPC is disabled 00:03:44.367 EAL: Heap on socket 0 was expanded by 34MB 00:03:44.367 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.367 EAL: request: mp_malloc_sync 00:03:44.367 EAL: No shared files mode enabled, IPC is disabled 00:03:44.367 EAL: Heap on socket 0 was shrunk by 34MB 00:03:44.367 EAL: Trying to obtain current memory policy. 00:03:44.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.624 EAL: Restoring previous memory policy: 4 00:03:44.624 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.624 EAL: request: mp_malloc_sync 00:03:44.624 EAL: No shared files mode enabled, IPC is disabled 00:03:44.624 EAL: Heap on socket 0 was expanded by 66MB 00:03:44.624 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.624 EAL: request: mp_malloc_sync 00:03:44.624 EAL: No shared files mode enabled, IPC is disabled 00:03:44.624 EAL: Heap on socket 0 was shrunk by 66MB 00:03:44.624 EAL: Trying to obtain current memory policy. 00:03:44.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.881 EAL: Restoring previous memory policy: 4 00:03:44.881 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.881 EAL: request: mp_malloc_sync 00:03:44.881 EAL: No shared files mode enabled, IPC is disabled 00:03:44.881 EAL: Heap on socket 0 was expanded by 130MB 00:03:44.881 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.138 EAL: request: mp_malloc_sync 00:03:45.138 EAL: No shared files mode enabled, IPC is disabled 00:03:45.138 EAL: Heap on socket 0 was shrunk by 130MB 00:03:45.138 EAL: Trying to obtain current memory policy. 00:03:45.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.448 EAL: Restoring previous memory policy: 4 00:03:45.448 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.448 EAL: request: mp_malloc_sync 00:03:45.448 EAL: No shared files mode enabled, IPC is disabled 00:03:45.448 EAL: Heap on socket 0 was expanded by 258MB 00:03:45.705 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.705 EAL: request: mp_malloc_sync 00:03:45.705 EAL: No shared files mode enabled, IPC is disabled 00:03:45.705 EAL: Heap on socket 0 was shrunk by 258MB 00:03:46.270 EAL: Trying to obtain current memory policy. 
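The vtophys_spdk_malloc_test rounds above grow the heap in steps of 4, 6, 10, 18, 34, 66, 130 and 258 MB (with 514 MB and 1026 MB following below); each request is 2^k + 2 MB, and every expansion fires the 'spdk:' mem event callback while the matching shrink exercises the unregister path. The progression can be reproduced with a trivial loop (illustrative only):

    for k in $(seq 1 10); do echo "$(( (1 << k) + 2 ))MB"; done   # 4 6 10 18 34 66 130 258 514 1026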
00:03:46.270 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:46.270 EAL: Restoring previous memory policy: 4 00:03:46.270 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.270 EAL: request: mp_malloc_sync 00:03:46.270 EAL: No shared files mode enabled, IPC is disabled 00:03:46.270 EAL: Heap on socket 0 was expanded by 514MB 00:03:47.201 EAL: Calling mem event callback 'spdk:(nil)' 00:03:47.459 EAL: request: mp_malloc_sync 00:03:47.459 EAL: No shared files mode enabled, IPC is disabled 00:03:47.459 EAL: Heap on socket 0 was shrunk by 514MB 00:03:48.392 EAL: Trying to obtain current memory policy. 00:03:48.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:48.392 EAL: Restoring previous memory policy: 4 00:03:48.393 EAL: Calling mem event callback 'spdk:(nil)' 00:03:48.393 EAL: request: mp_malloc_sync 00:03:48.393 EAL: No shared files mode enabled, IPC is disabled 00:03:48.393 EAL: Heap on socket 0 was expanded by 1026MB 00:03:50.289 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.547 EAL: request: mp_malloc_sync 00:03:50.547 EAL: No shared files mode enabled, IPC is disabled 00:03:50.547 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:52.447 passed 00:03:52.447 00:03:52.447 Run Summary: Type Total Ran Passed Failed Inactive 00:03:52.447 suites 1 1 n/a 0 0 00:03:52.447 tests 2 2 2 0 0 00:03:52.447 asserts 497 497 497 0 n/a 00:03:52.447 00:03:52.447 Elapsed time = 8.206 seconds 00:03:52.447 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.447 EAL: request: mp_malloc_sync 00:03:52.447 EAL: No shared files mode enabled, IPC is disabled 00:03:52.447 EAL: Heap on socket 0 was shrunk by 2MB 00:03:52.447 EAL: No shared files mode enabled, IPC is disabled 00:03:52.447 EAL: No shared files mode enabled, IPC is disabled 00:03:52.447 EAL: No shared files mode enabled, IPC is disabled 00:03:52.447 00:03:52.448 real 0m8.509s 00:03:52.448 user 0m7.393s 00:03:52.448 sys 0m1.048s 00:03:52.448 18:06:04 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.448 18:06:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:52.448 ************************************ 00:03:52.448 END TEST env_vtophys 00:03:52.448 ************************************ 00:03:52.448 18:06:04 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:52.448 18:06:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.448 18:06:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.448 18:06:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.448 ************************************ 00:03:52.448 START TEST env_pci 00:03:52.448 ************************************ 00:03:52.448 18:06:04 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:52.448 00:03:52.448 00:03:52.448 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.448 http://cunit.sourceforge.net/ 00:03:52.448 00:03:52.448 00:03:52.448 Suite: pci 00:03:52.448 Test: pci_hook ...[2024-11-28 18:06:04.205767] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2322794 has claimed it 00:03:52.448 EAL: Cannot find device (10000:00:01.0) 00:03:52.448 EAL: Failed to attach device on primary process 00:03:52.448 passed 00:03:52.448 00:03:52.448 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:52.448 suites 1 1 n/a 0 0 00:03:52.448 tests 1 1 1 0 0 00:03:52.448 asserts 25 25 25 0 n/a 00:03:52.448 00:03:52.448 Elapsed time = 0.044 seconds 00:03:52.448 00:03:52.448 real 0m0.096s 00:03:52.448 user 0m0.037s 00:03:52.448 sys 0m0.058s 00:03:52.448 18:06:04 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.448 18:06:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:52.448 ************************************ 00:03:52.448 END TEST env_pci 00:03:52.448 ************************************ 00:03:52.448 18:06:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:52.448 18:06:04 env -- env/env.sh@15 -- # uname 00:03:52.448 18:06:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:52.448 18:06:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:52.448 18:06:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:52.448 18:06:04 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:52.448 18:06:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.448 18:06:04 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.448 ************************************ 00:03:52.448 START TEST env_dpdk_post_init 00:03:52.448 ************************************ 00:03:52.448 18:06:04 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:52.448 EAL: Detected CPU lcores: 48 00:03:52.448 EAL: Detected NUMA nodes: 2 00:03:52.448 EAL: Detected shared linkage of DPDK 00:03:52.448 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:52.448 EAL: Selected IOVA mode 'VA' 00:03:52.448 EAL: VFIO support initialized 00:03:52.448 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:52.707 EAL: Using IOMMU type 1 (Type 1) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:52.707 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:53.639 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
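The probe lines above show each I/OAT channel and the NVMe controller at 0000:88:00.0 being claimed by SPDK's userspace PCI drivers after the earlier ioatdma/nvme -> vfio-pci rebinds. Which kernel driver currently owns a given BDF can be verified directly through sysfs, independent of setup.sh (standard sysfs layout, shown for the NVMe device from this run):

    readlink /sys/bus/pci/devices/0000:88:00.0/driver    # e.g. .../drivers/vfio-pci
    cat /sys/bus/pci/devices/0000:88:00.0/vendor \
        /sys/bus/pci/devices/0000:88:00.0/device          # 0x8086 / 0x0a54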
00:03:56.920 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:56.920 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:56.920 Starting DPDK initialization... 00:03:56.920 Starting SPDK post initialization... 00:03:56.920 SPDK NVMe probe 00:03:56.920 Attaching to 0000:88:00.0 00:03:56.920 Attached to 0000:88:00.0 00:03:56.920 Cleaning up... 00:03:56.920 00:03:56.920 real 0m4.586s 00:03:56.920 user 0m3.152s 00:03:56.920 sys 0m0.491s 00:03:56.920 18:06:08 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.920 18:06:08 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:56.920 ************************************ 00:03:56.920 END TEST env_dpdk_post_init 00:03:56.920 ************************************ 00:03:56.920 18:06:08 env -- env/env.sh@26 -- # uname 00:03:56.920 18:06:08 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:56.920 18:06:08 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:56.920 18:06:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.920 18:06:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.920 18:06:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.920 ************************************ 00:03:56.920 START TEST env_mem_callbacks 00:03:56.920 ************************************ 00:03:56.920 18:06:08 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:56.920 EAL: Detected CPU lcores: 48 00:03:56.920 EAL: Detected NUMA nodes: 2 00:03:56.920 EAL: Detected shared linkage of DPDK 00:03:56.920 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:57.178 EAL: Selected IOVA mode 'VA' 00:03:57.178 EAL: VFIO support initialized 00:03:57.178 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:57.178 00:03:57.178 00:03:57.178 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.178 http://cunit.sourceforge.net/ 00:03:57.178 00:03:57.178 00:03:57.178 Suite: memory 00:03:57.178 Test: test ... 
00:03:57.178 register 0x200000200000 2097152 00:03:57.178 malloc 3145728 00:03:57.178 register 0x200000400000 4194304 00:03:57.178 buf 0x2000004fffc0 len 3145728 PASSED 00:03:57.178 malloc 64 00:03:57.178 buf 0x2000004ffec0 len 64 PASSED 00:03:57.178 malloc 4194304 00:03:57.178 register 0x200000800000 6291456 00:03:57.178 buf 0x2000009fffc0 len 4194304 PASSED 00:03:57.178 free 0x2000004fffc0 3145728 00:03:57.178 free 0x2000004ffec0 64 00:03:57.178 unregister 0x200000400000 4194304 PASSED 00:03:57.178 free 0x2000009fffc0 4194304 00:03:57.179 unregister 0x200000800000 6291456 PASSED 00:03:57.179 malloc 8388608 00:03:57.179 register 0x200000400000 10485760 00:03:57.179 buf 0x2000005fffc0 len 8388608 PASSED 00:03:57.179 free 0x2000005fffc0 8388608 00:03:57.179 unregister 0x200000400000 10485760 PASSED 00:03:57.179 passed 00:03:57.179 00:03:57.179 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.179 suites 1 1 n/a 0 0 00:03:57.179 tests 1 1 1 0 0 00:03:57.179 asserts 15 15 15 0 n/a 00:03:57.179 00:03:57.179 Elapsed time = 0.060 seconds 00:03:57.179 00:03:57.179 real 0m0.178s 00:03:57.179 user 0m0.100s 00:03:57.179 sys 0m0.077s 00:03:57.179 18:06:09 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.179 18:06:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:57.179 ************************************ 00:03:57.179 END TEST env_mem_callbacks 00:03:57.179 ************************************ 00:03:57.179 00:03:57.179 real 0m13.999s 00:03:57.179 user 0m11.111s 00:03:57.179 sys 0m1.898s 00:03:57.179 18:06:09 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.179 18:06:09 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.179 ************************************ 00:03:57.179 END TEST env 00:03:57.179 ************************************ 00:03:57.179 18:06:09 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:57.179 18:06:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.179 18:06:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.179 18:06:09 -- common/autotest_common.sh@10 -- # set +x 00:03:57.179 ************************************ 00:03:57.179 START TEST rpc 00:03:57.179 ************************************ 00:03:57.179 18:06:09 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:57.179 * Looking for test storage... 
00:03:57.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:57.179 18:06:09 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:57.179 18:06:09 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:57.179 18:06:09 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.437 18:06:09 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.437 18:06:09 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.437 18:06:09 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.437 18:06:09 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.437 18:06:09 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.437 18:06:09 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.437 18:06:09 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.437 18:06:09 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.437 18:06:09 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.437 18:06:09 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.437 18:06:09 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.437 18:06:09 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.438 18:06:09 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:57.438 18:06:09 rpc -- scripts/common.sh@345 -- # : 1 00:03:57.438 18:06:09 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.438 18:06:09 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:57.438 18:06:09 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:57.438 18:06:09 rpc -- scripts/common.sh@353 -- # local d=1 00:03:57.438 18:06:09 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.438 18:06:09 rpc -- scripts/common.sh@355 -- # echo 1 00:03:57.438 18:06:09 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.438 18:06:09 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:57.438 18:06:09 rpc -- scripts/common.sh@353 -- # local d=2 00:03:57.438 18:06:09 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.438 18:06:09 rpc -- scripts/common.sh@355 -- # echo 2 00:03:57.438 18:06:09 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.438 18:06:09 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.438 18:06:09 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.438 18:06:09 rpc -- scripts/common.sh@368 -- # return 0 00:03:57.438 18:06:09 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.438 18:06:09 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.438 --rc genhtml_branch_coverage=1 00:03:57.438 --rc genhtml_function_coverage=1 00:03:57.438 --rc genhtml_legend=1 00:03:57.438 --rc geninfo_all_blocks=1 00:03:57.438 --rc geninfo_unexecuted_blocks=1 00:03:57.438 00:03:57.438 ' 00:03:57.438 18:06:09 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.438 --rc genhtml_branch_coverage=1 00:03:57.438 --rc genhtml_function_coverage=1 00:03:57.438 --rc genhtml_legend=1 00:03:57.438 --rc geninfo_all_blocks=1 00:03:57.438 --rc geninfo_unexecuted_blocks=1 00:03:57.438 00:03:57.438 ' 00:03:57.438 18:06:09 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:57.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.438 --rc genhtml_branch_coverage=1 00:03:57.438 --rc genhtml_function_coverage=1 
00:03:57.438 --rc genhtml_legend=1 00:03:57.438 --rc geninfo_all_blocks=1 00:03:57.438 --rc geninfo_unexecuted_blocks=1 00:03:57.438 00:03:57.438 ' 00:03:57.438 18:06:09 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:57.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.438 --rc genhtml_branch_coverage=1 00:03:57.438 --rc genhtml_function_coverage=1 00:03:57.438 --rc genhtml_legend=1 00:03:57.438 --rc geninfo_all_blocks=1 00:03:57.438 --rc geninfo_unexecuted_blocks=1 00:03:57.438 00:03:57.438 ' 00:03:57.438 18:06:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2324095 00:03:57.438 18:06:09 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:57.438 18:06:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.438 18:06:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2324095 00:03:57.438 18:06:09 rpc -- common/autotest_common.sh@835 -- # '[' -z 2324095 ']' 00:03:57.438 18:06:09 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.438 18:06:09 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:57.438 18:06:09 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.438 18:06:09 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:57.438 18:06:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.438 [2024-11-28 18:06:09.436851] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:03:57.438 [2024-11-28 18:06:09.437011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324095 ] 00:03:57.696 [2024-11-28 18:06:09.570910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.696 [2024-11-28 18:06:09.689554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:57.696 [2024-11-28 18:06:09.689642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2324095' to capture a snapshot of events at runtime. 00:03:57.696 [2024-11-28 18:06:09.689670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:57.696 [2024-11-28 18:06:09.689691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:57.696 [2024-11-28 18:06:09.689721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2324095 for offline analysis/debug. 
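# Editor's sketch (annotation, not part of the captured log): the app_setup_trace
# notices above describe how the trace shared-memory file left by this spdk_tgt
# instance could be inspected after the fact. Assuming an SPDK build tree, roughly:
build/bin/spdk_trace -s spdk_tgt -p 2324095   # command suggested by the notice above
scripts/rpc.py trace_get_info                 # same RPC the rpc_trace_cmd_test below uses
# The "-e bdev" flag passed to spdk_tgt above is what produces the 0x8 tpoint
# group mask reported by trace_get_info further down; the spdk_trace and rpc.py
# paths are assumed relative to the spdk checkout.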
00:03:57.696 [2024-11-28 18:06:09.691322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.631 18:06:10 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:58.631 18:06:10 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:58.631 18:06:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:58.631 18:06:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:58.631 18:06:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:58.631 18:06:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:58.631 18:06:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.631 18:06:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.631 18:06:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.631 ************************************ 00:03:58.631 START TEST rpc_integrity 00:03:58.631 ************************************ 00:03:58.631 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:58.631 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.631 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.632 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.632 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.632 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:58.632 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:58.890 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:58.890 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:58.890 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.890 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.890 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.890 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:58.890 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:58.890 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.890 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.890 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.890 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:58.890 { 00:03:58.890 "name": "Malloc0", 00:03:58.890 "aliases": [ 00:03:58.890 "b566e5b8-9503-437d-adac-0cf0026fcb3e" 00:03:58.890 ], 00:03:58.890 "product_name": "Malloc disk", 00:03:58.890 "block_size": 512, 00:03:58.890 "num_blocks": 16384, 00:03:58.890 "uuid": "b566e5b8-9503-437d-adac-0cf0026fcb3e", 00:03:58.890 "assigned_rate_limits": { 00:03:58.890 "rw_ios_per_sec": 0, 00:03:58.891 "rw_mbytes_per_sec": 0, 00:03:58.891 "r_mbytes_per_sec": 0, 00:03:58.891 "w_mbytes_per_sec": 0 00:03:58.891 }, 
00:03:58.891 "claimed": false, 00:03:58.891 "zoned": false, 00:03:58.891 "supported_io_types": { 00:03:58.891 "read": true, 00:03:58.891 "write": true, 00:03:58.891 "unmap": true, 00:03:58.891 "flush": true, 00:03:58.891 "reset": true, 00:03:58.891 "nvme_admin": false, 00:03:58.891 "nvme_io": false, 00:03:58.891 "nvme_io_md": false, 00:03:58.891 "write_zeroes": true, 00:03:58.891 "zcopy": true, 00:03:58.891 "get_zone_info": false, 00:03:58.891 "zone_management": false, 00:03:58.891 "zone_append": false, 00:03:58.891 "compare": false, 00:03:58.891 "compare_and_write": false, 00:03:58.891 "abort": true, 00:03:58.891 "seek_hole": false, 00:03:58.891 "seek_data": false, 00:03:58.891 "copy": true, 00:03:58.891 "nvme_iov_md": false 00:03:58.891 }, 00:03:58.891 "memory_domains": [ 00:03:58.891 { 00:03:58.891 "dma_device_id": "system", 00:03:58.891 "dma_device_type": 1 00:03:58.891 }, 00:03:58.891 { 00:03:58.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.891 "dma_device_type": 2 00:03:58.891 } 00:03:58.891 ], 00:03:58.891 "driver_specific": {} 00:03:58.891 } 00:03:58.891 ]' 00:03:58.891 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:58.891 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:58.891 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:58.891 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.891 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.891 [2024-11-28 18:06:10.806543] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:58.891 [2024-11-28 18:06:10.806611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:58.891 [2024-11-28 18:06:10.806656] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:03:58.891 [2024-11-28 18:06:10.806681] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:58.891 [2024-11-28 18:06:10.809523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:58.891 [2024-11-28 18:06:10.809562] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:58.891 Passthru0 00:03:58.891 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.891 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:58.891 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.891 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.891 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.891 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:58.891 { 00:03:58.891 "name": "Malloc0", 00:03:58.891 "aliases": [ 00:03:58.891 "b566e5b8-9503-437d-adac-0cf0026fcb3e" 00:03:58.891 ], 00:03:58.891 "product_name": "Malloc disk", 00:03:58.891 "block_size": 512, 00:03:58.891 "num_blocks": 16384, 00:03:58.891 "uuid": "b566e5b8-9503-437d-adac-0cf0026fcb3e", 00:03:58.891 "assigned_rate_limits": { 00:03:58.891 "rw_ios_per_sec": 0, 00:03:58.891 "rw_mbytes_per_sec": 0, 00:03:58.891 "r_mbytes_per_sec": 0, 00:03:58.891 "w_mbytes_per_sec": 0 00:03:58.891 }, 00:03:58.891 "claimed": true, 00:03:58.891 "claim_type": "exclusive_write", 00:03:58.891 "zoned": false, 00:03:58.891 "supported_io_types": { 00:03:58.891 "read": true, 00:03:58.891 "write": true, 00:03:58.891 "unmap": true, 00:03:58.891 
"flush": true, 00:03:58.891 "reset": true, 00:03:58.891 "nvme_admin": false, 00:03:58.891 "nvme_io": false, 00:03:58.891 "nvme_io_md": false, 00:03:58.891 "write_zeroes": true, 00:03:58.891 "zcopy": true, 00:03:58.891 "get_zone_info": false, 00:03:58.891 "zone_management": false, 00:03:58.891 "zone_append": false, 00:03:58.891 "compare": false, 00:03:58.891 "compare_and_write": false, 00:03:58.891 "abort": true, 00:03:58.891 "seek_hole": false, 00:03:58.891 "seek_data": false, 00:03:58.891 "copy": true, 00:03:58.891 "nvme_iov_md": false 00:03:58.891 }, 00:03:58.891 "memory_domains": [ 00:03:58.891 { 00:03:58.891 "dma_device_id": "system", 00:03:58.891 "dma_device_type": 1 00:03:58.891 }, 00:03:58.891 { 00:03:58.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.891 "dma_device_type": 2 00:03:58.891 } 00:03:58.891 ], 00:03:58.891 "driver_specific": {} 00:03:58.891 }, 00:03:58.891 { 00:03:58.891 "name": "Passthru0", 00:03:58.891 "aliases": [ 00:03:58.891 "0566fe00-aea1-5204-b20c-2c6982b849cc" 00:03:58.891 ], 00:03:58.891 "product_name": "passthru", 00:03:58.891 "block_size": 512, 00:03:58.891 "num_blocks": 16384, 00:03:58.891 "uuid": "0566fe00-aea1-5204-b20c-2c6982b849cc", 00:03:58.891 "assigned_rate_limits": { 00:03:58.891 "rw_ios_per_sec": 0, 00:03:58.891 "rw_mbytes_per_sec": 0, 00:03:58.891 "r_mbytes_per_sec": 0, 00:03:58.891 "w_mbytes_per_sec": 0 00:03:58.891 }, 00:03:58.891 "claimed": false, 00:03:58.891 "zoned": false, 00:03:58.891 "supported_io_types": { 00:03:58.891 "read": true, 00:03:58.891 "write": true, 00:03:58.891 "unmap": true, 00:03:58.891 "flush": true, 00:03:58.891 "reset": true, 00:03:58.891 "nvme_admin": false, 00:03:58.891 "nvme_io": false, 00:03:58.891 "nvme_io_md": false, 00:03:58.891 "write_zeroes": true, 00:03:58.891 "zcopy": true, 00:03:58.891 "get_zone_info": false, 00:03:58.891 "zone_management": false, 00:03:58.891 "zone_append": false, 00:03:58.891 "compare": false, 00:03:58.891 "compare_and_write": false, 00:03:58.891 "abort": true, 00:03:58.891 "seek_hole": false, 00:03:58.891 "seek_data": false, 00:03:58.891 "copy": true, 00:03:58.891 "nvme_iov_md": false 00:03:58.891 }, 00:03:58.891 "memory_domains": [ 00:03:58.891 { 00:03:58.891 "dma_device_id": "system", 00:03:58.891 "dma_device_type": 1 00:03:58.891 }, 00:03:58.891 { 00:03:58.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.891 "dma_device_type": 2 00:03:58.891 } 00:03:58.891 ], 00:03:58.891 "driver_specific": { 00:03:58.891 "passthru": { 00:03:58.891 "name": "Passthru0", 00:03:58.891 "base_bdev_name": "Malloc0" 00:03:58.891 } 00:03:58.891 } 00:03:58.891 } 00:03:58.891 ]' 00:03:58.891 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:58.891 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:58.891 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:58.891 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.891 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.891 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.892 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:58.892 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.892 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.892 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.892 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@25 
-- # rpc_cmd bdev_get_bdevs 00:03:58.892 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.892 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.892 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.892 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:58.892 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:58.892 18:06:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:58.892 00:03:58.892 real 0m0.251s 00:03:58.892 user 0m0.148s 00:03:58.892 sys 0m0.019s 00:03:58.892 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.892 18:06:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.892 ************************************ 00:03:58.892 END TEST rpc_integrity 00:03:58.892 ************************************ 00:03:58.892 18:06:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:58.892 18:06:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.892 18:06:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.892 18:06:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.150 ************************************ 00:03:59.150 START TEST rpc_plugins 00:03:59.150 ************************************ 00:03:59.150 18:06:10 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:59.150 18:06:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:59.150 18:06:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.150 18:06:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.150 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.150 18:06:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:59.150 18:06:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:59.150 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.150 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.150 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.150 18:06:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:59.150 { 00:03:59.150 "name": "Malloc1", 00:03:59.150 "aliases": [ 00:03:59.150 "d142ac78-2a69-4b9d-8f00-d65a089d2fcc" 00:03:59.150 ], 00:03:59.150 "product_name": "Malloc disk", 00:03:59.150 "block_size": 4096, 00:03:59.150 "num_blocks": 256, 00:03:59.150 "uuid": "d142ac78-2a69-4b9d-8f00-d65a089d2fcc", 00:03:59.150 "assigned_rate_limits": { 00:03:59.150 "rw_ios_per_sec": 0, 00:03:59.150 "rw_mbytes_per_sec": 0, 00:03:59.150 "r_mbytes_per_sec": 0, 00:03:59.150 "w_mbytes_per_sec": 0 00:03:59.150 }, 00:03:59.150 "claimed": false, 00:03:59.150 "zoned": false, 00:03:59.150 "supported_io_types": { 00:03:59.150 "read": true, 00:03:59.150 "write": true, 00:03:59.150 "unmap": true, 00:03:59.150 "flush": true, 00:03:59.150 "reset": true, 00:03:59.150 "nvme_admin": false, 00:03:59.150 "nvme_io": false, 00:03:59.150 "nvme_io_md": false, 00:03:59.150 "write_zeroes": true, 00:03:59.150 "zcopy": true, 00:03:59.150 "get_zone_info": false, 00:03:59.150 "zone_management": false, 00:03:59.150 "zone_append": false, 00:03:59.150 "compare": false, 00:03:59.150 "compare_and_write": false, 00:03:59.150 "abort": true, 00:03:59.150 "seek_hole": false, 00:03:59.150 "seek_data": false, 00:03:59.150 "copy": true, 00:03:59.150 "nvme_iov_md": 
false 00:03:59.150 }, 00:03:59.151 "memory_domains": [ 00:03:59.151 { 00:03:59.151 "dma_device_id": "system", 00:03:59.151 "dma_device_type": 1 00:03:59.151 }, 00:03:59.151 { 00:03:59.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.151 "dma_device_type": 2 00:03:59.151 } 00:03:59.151 ], 00:03:59.151 "driver_specific": {} 00:03:59.151 } 00:03:59.151 ]' 00:03:59.151 18:06:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:59.151 18:06:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:59.151 18:06:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:59.151 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.151 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.151 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.151 18:06:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:59.151 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.151 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.151 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.151 18:06:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:59.151 18:06:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:59.151 18:06:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:59.151 00:03:59.151 real 0m0.121s 00:03:59.151 user 0m0.070s 00:03:59.151 sys 0m0.017s 00:03:59.151 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.151 18:06:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:59.151 ************************************ 00:03:59.151 END TEST rpc_plugins 00:03:59.151 ************************************ 00:03:59.151 18:06:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:59.151 18:06:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.151 18:06:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.151 18:06:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.151 ************************************ 00:03:59.151 START TEST rpc_trace_cmd_test 00:03:59.151 ************************************ 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:59.151 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2324095", 00:03:59.151 "tpoint_group_mask": "0x8", 00:03:59.151 "iscsi_conn": { 00:03:59.151 "mask": "0x2", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "scsi": { 00:03:59.151 "mask": "0x4", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "bdev": { 00:03:59.151 "mask": "0x8", 00:03:59.151 "tpoint_mask": "0xffffffffffffffff" 00:03:59.151 }, 00:03:59.151 "nvmf_rdma": { 00:03:59.151 "mask": "0x10", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "nvmf_tcp": { 00:03:59.151 "mask": "0x20", 00:03:59.151 
"tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "ftl": { 00:03:59.151 "mask": "0x40", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "blobfs": { 00:03:59.151 "mask": "0x80", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "dsa": { 00:03:59.151 "mask": "0x200", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "thread": { 00:03:59.151 "mask": "0x400", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "nvme_pcie": { 00:03:59.151 "mask": "0x800", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "iaa": { 00:03:59.151 "mask": "0x1000", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "nvme_tcp": { 00:03:59.151 "mask": "0x2000", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "bdev_nvme": { 00:03:59.151 "mask": "0x4000", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "sock": { 00:03:59.151 "mask": "0x8000", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "blob": { 00:03:59.151 "mask": "0x10000", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "bdev_raid": { 00:03:59.151 "mask": "0x20000", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 }, 00:03:59.151 "scheduler": { 00:03:59.151 "mask": "0x40000", 00:03:59.151 "tpoint_mask": "0x0" 00:03:59.151 } 00:03:59.151 }' 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:59.151 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:59.409 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:59.409 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:59.409 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:59.409 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:59.409 18:06:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:59.410 00:03:59.410 real 0m0.202s 00:03:59.410 user 0m0.178s 00:03:59.410 sys 0m0.017s 00:03:59.410 18:06:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.410 18:06:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:59.410 ************************************ 00:03:59.410 END TEST rpc_trace_cmd_test 00:03:59.410 ************************************ 00:03:59.410 18:06:11 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:59.410 18:06:11 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:59.410 18:06:11 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:59.410 18:06:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.410 18:06:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.410 18:06:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.410 ************************************ 00:03:59.410 START TEST rpc_daemon_integrity 00:03:59.410 ************************************ 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.410 18:06:11 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:59.410 { 00:03:59.410 "name": "Malloc2", 00:03:59.410 "aliases": [ 00:03:59.410 "8ced4b0f-fee4-4eeb-a53f-379e74c7a649" 00:03:59.410 ], 00:03:59.410 "product_name": "Malloc disk", 00:03:59.410 "block_size": 512, 00:03:59.410 "num_blocks": 16384, 00:03:59.410 "uuid": "8ced4b0f-fee4-4eeb-a53f-379e74c7a649", 00:03:59.410 "assigned_rate_limits": { 00:03:59.410 "rw_ios_per_sec": 0, 00:03:59.410 "rw_mbytes_per_sec": 0, 00:03:59.410 "r_mbytes_per_sec": 0, 00:03:59.410 "w_mbytes_per_sec": 0 00:03:59.410 }, 00:03:59.410 "claimed": false, 00:03:59.410 "zoned": false, 00:03:59.410 "supported_io_types": { 00:03:59.410 "read": true, 00:03:59.410 "write": true, 00:03:59.410 "unmap": true, 00:03:59.410 "flush": true, 00:03:59.410 "reset": true, 00:03:59.410 "nvme_admin": false, 00:03:59.410 "nvme_io": false, 00:03:59.410 "nvme_io_md": false, 00:03:59.410 "write_zeroes": true, 00:03:59.410 "zcopy": true, 00:03:59.410 "get_zone_info": false, 00:03:59.410 "zone_management": false, 00:03:59.410 "zone_append": false, 00:03:59.410 "compare": false, 00:03:59.410 "compare_and_write": false, 00:03:59.410 "abort": true, 00:03:59.410 "seek_hole": false, 00:03:59.410 "seek_data": false, 00:03:59.410 "copy": true, 00:03:59.410 "nvme_iov_md": false 00:03:59.410 }, 00:03:59.410 "memory_domains": [ 00:03:59.410 { 00:03:59.410 "dma_device_id": "system", 00:03:59.410 "dma_device_type": 1 00:03:59.410 }, 00:03:59.410 { 00:03:59.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.410 "dma_device_type": 2 00:03:59.410 } 00:03:59.410 ], 00:03:59.410 "driver_specific": {} 00:03:59.410 } 00:03:59.410 ]' 00:03:59.410 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:59.668 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:59.668 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:59.668 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.668 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.668 [2024-11-28 18:06:11.535575] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:59.668 
[2024-11-28 18:06:11.535636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:59.668 [2024-11-28 18:06:11.535680] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:03:59.668 [2024-11-28 18:06:11.535704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:59.668 [2024-11-28 18:06:11.538468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:59.668 [2024-11-28 18:06:11.538505] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:59.668 Passthru0 00:03:59.668 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.668 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:59.668 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.668 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.668 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.668 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:59.668 { 00:03:59.668 "name": "Malloc2", 00:03:59.668 "aliases": [ 00:03:59.668 "8ced4b0f-fee4-4eeb-a53f-379e74c7a649" 00:03:59.668 ], 00:03:59.668 "product_name": "Malloc disk", 00:03:59.668 "block_size": 512, 00:03:59.668 "num_blocks": 16384, 00:03:59.668 "uuid": "8ced4b0f-fee4-4eeb-a53f-379e74c7a649", 00:03:59.668 "assigned_rate_limits": { 00:03:59.668 "rw_ios_per_sec": 0, 00:03:59.668 "rw_mbytes_per_sec": 0, 00:03:59.668 "r_mbytes_per_sec": 0, 00:03:59.668 "w_mbytes_per_sec": 0 00:03:59.668 }, 00:03:59.668 "claimed": true, 00:03:59.668 "claim_type": "exclusive_write", 00:03:59.668 "zoned": false, 00:03:59.668 "supported_io_types": { 00:03:59.668 "read": true, 00:03:59.668 "write": true, 00:03:59.668 "unmap": true, 00:03:59.668 "flush": true, 00:03:59.668 "reset": true, 00:03:59.668 "nvme_admin": false, 00:03:59.668 "nvme_io": false, 00:03:59.668 "nvme_io_md": false, 00:03:59.669 "write_zeroes": true, 00:03:59.669 "zcopy": true, 00:03:59.669 "get_zone_info": false, 00:03:59.669 "zone_management": false, 00:03:59.669 "zone_append": false, 00:03:59.669 "compare": false, 00:03:59.669 "compare_and_write": false, 00:03:59.669 "abort": true, 00:03:59.669 "seek_hole": false, 00:03:59.669 "seek_data": false, 00:03:59.669 "copy": true, 00:03:59.669 "nvme_iov_md": false 00:03:59.669 }, 00:03:59.669 "memory_domains": [ 00:03:59.669 { 00:03:59.669 "dma_device_id": "system", 00:03:59.669 "dma_device_type": 1 00:03:59.669 }, 00:03:59.669 { 00:03:59.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.669 "dma_device_type": 2 00:03:59.669 } 00:03:59.669 ], 00:03:59.669 "driver_specific": {} 00:03:59.669 }, 00:03:59.669 { 00:03:59.669 "name": "Passthru0", 00:03:59.669 "aliases": [ 00:03:59.669 "086da66e-940b-51a4-a4af-c92cf27c185b" 00:03:59.669 ], 00:03:59.669 "product_name": "passthru", 00:03:59.669 "block_size": 512, 00:03:59.669 "num_blocks": 16384, 00:03:59.669 "uuid": "086da66e-940b-51a4-a4af-c92cf27c185b", 00:03:59.669 "assigned_rate_limits": { 00:03:59.669 "rw_ios_per_sec": 0, 00:03:59.669 "rw_mbytes_per_sec": 0, 00:03:59.669 "r_mbytes_per_sec": 0, 00:03:59.669 "w_mbytes_per_sec": 0 00:03:59.669 }, 00:03:59.669 "claimed": false, 00:03:59.669 "zoned": false, 00:03:59.669 "supported_io_types": { 00:03:59.669 "read": true, 00:03:59.669 "write": true, 00:03:59.669 "unmap": true, 00:03:59.669 "flush": true, 00:03:59.669 "reset": true, 
00:03:59.669 "nvme_admin": false, 00:03:59.669 "nvme_io": false, 00:03:59.669 "nvme_io_md": false, 00:03:59.669 "write_zeroes": true, 00:03:59.669 "zcopy": true, 00:03:59.669 "get_zone_info": false, 00:03:59.669 "zone_management": false, 00:03:59.669 "zone_append": false, 00:03:59.669 "compare": false, 00:03:59.669 "compare_and_write": false, 00:03:59.669 "abort": true, 00:03:59.669 "seek_hole": false, 00:03:59.669 "seek_data": false, 00:03:59.669 "copy": true, 00:03:59.669 "nvme_iov_md": false 00:03:59.669 }, 00:03:59.669 "memory_domains": [ 00:03:59.669 { 00:03:59.669 "dma_device_id": "system", 00:03:59.669 "dma_device_type": 1 00:03:59.669 }, 00:03:59.669 { 00:03:59.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:59.669 "dma_device_type": 2 00:03:59.669 } 00:03:59.669 ], 00:03:59.669 "driver_specific": { 00:03:59.669 "passthru": { 00:03:59.669 "name": "Passthru0", 00:03:59.669 "base_bdev_name": "Malloc2" 00:03:59.669 } 00:03:59.669 } 00:03:59.669 } 00:03:59.669 ]' 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:59.669 00:03:59.669 real 0m0.271s 00:03:59.669 user 0m0.157s 00:03:59.669 sys 0m0.027s 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.669 18:06:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:59.669 ************************************ 00:03:59.669 END TEST rpc_daemon_integrity 00:03:59.669 ************************************ 00:03:59.669 18:06:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:59.669 18:06:11 rpc -- rpc/rpc.sh@84 -- # killprocess 2324095 00:03:59.669 18:06:11 rpc -- common/autotest_common.sh@954 -- # '[' -z 2324095 ']' 00:03:59.669 18:06:11 rpc -- common/autotest_common.sh@958 -- # kill -0 2324095 00:03:59.669 18:06:11 rpc -- common/autotest_common.sh@959 -- # uname 00:03:59.669 18:06:11 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:59.669 18:06:11 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2324095 
00:03:59.669 18:06:11 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:59.669 18:06:11 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:59.669 18:06:11 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2324095' 00:03:59.669 killing process with pid 2324095 00:03:59.669 18:06:11 rpc -- common/autotest_common.sh@973 -- # kill 2324095 00:03:59.669 18:06:11 rpc -- common/autotest_common.sh@978 -- # wait 2324095 00:04:02.200 00:04:02.200 real 0m4.978s 00:04:02.200 user 0m5.574s 00:04:02.200 sys 0m0.831s 00:04:02.200 18:06:14 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.200 18:06:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.200 ************************************ 00:04:02.200 END TEST rpc 00:04:02.200 ************************************ 00:04:02.200 18:06:14 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:02.200 18:06:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.200 18:06:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.200 18:06:14 -- common/autotest_common.sh@10 -- # set +x 00:04:02.200 ************************************ 00:04:02.200 START TEST skip_rpc 00:04:02.200 ************************************ 00:04:02.200 18:06:14 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:02.200 * Looking for test storage... 00:04:02.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:02.200 18:06:14 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.200 18:06:14 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.200 18:06:14 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.458 18:06:14 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.458 18:06:14 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:02.458 18:06:14 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.458 18:06:14 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.458 --rc genhtml_branch_coverage=1 00:04:02.458 --rc genhtml_function_coverage=1 00:04:02.458 --rc genhtml_legend=1 00:04:02.458 --rc geninfo_all_blocks=1 00:04:02.458 --rc geninfo_unexecuted_blocks=1 00:04:02.458 00:04:02.458 ' 00:04:02.458 18:06:14 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.458 --rc genhtml_branch_coverage=1 00:04:02.458 --rc genhtml_function_coverage=1 00:04:02.458 --rc genhtml_legend=1 00:04:02.458 --rc geninfo_all_blocks=1 00:04:02.458 --rc geninfo_unexecuted_blocks=1 00:04:02.458 00:04:02.458 ' 00:04:02.458 18:06:14 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.458 --rc genhtml_branch_coverage=1 00:04:02.458 --rc genhtml_function_coverage=1 00:04:02.458 --rc genhtml_legend=1 00:04:02.458 --rc geninfo_all_blocks=1 00:04:02.458 --rc geninfo_unexecuted_blocks=1 00:04:02.458 00:04:02.458 ' 00:04:02.458 18:06:14 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.458 --rc genhtml_branch_coverage=1 00:04:02.458 --rc genhtml_function_coverage=1 00:04:02.458 --rc genhtml_legend=1 00:04:02.458 --rc geninfo_all_blocks=1 00:04:02.458 --rc geninfo_unexecuted_blocks=1 00:04:02.458 00:04:02.458 ' 00:04:02.458 18:06:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:02.458 18:06:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:02.458 18:06:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:02.458 18:06:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.458 18:06:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.458 18:06:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.458 ************************************ 00:04:02.458 START TEST skip_rpc 00:04:02.458 ************************************ 00:04:02.458 18:06:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:02.458 
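# Editor's sketch (annotation, not part of the captured log): the skip_rpc case
# that follows starts the target with its RPC server disabled and then checks
# that an RPC call fails. Reproduced by hand it is roughly:
build/bin/spdk_tgt --no-rpc-server -m 0x1 &
scripts/rpc.py spdk_get_version   # expected to fail: nothing listens on /var/tmp/spdk.sock
kill %1
# The flags and the spdk_get_version method come from the trace below; running
# the target in the background and killing it by job id is an assumption made
# for illustration (the test script tracks the PID itself).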
18:06:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2324814 00:04:02.458 18:06:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:02.458 18:06:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.458 18:06:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:02.458 [2024-11-28 18:06:14.492784] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:02.458 [2024-11-28 18:06:14.492933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2324814 ] 00:04:02.717 [2024-11-28 18:06:14.635895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.717 [2024-11-28 18:06:14.774724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2324814 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2324814 ']' 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2324814 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2324814 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2324814' 00:04:08.044 killing process with pid 2324814 00:04:08.044 18:06:19 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2324814 00:04:08.044 18:06:19 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2324814 00:04:09.946 00:04:09.946 real 0m7.500s 00:04:09.946 user 0m7.004s 00:04:09.946 sys 0m0.494s 00:04:09.946 18:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.946 18:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.946 ************************************ 00:04:09.946 END TEST skip_rpc 00:04:09.946 ************************************ 00:04:09.946 18:06:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:09.946 18:06:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.946 18:06:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.946 18:06:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.946 ************************************ 00:04:09.946 START TEST skip_rpc_with_json 00:04:09.946 ************************************ 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2325769 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2325769 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2325769 ']' 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.946 18:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.946 [2024-11-28 18:06:22.041319] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:09.946 [2024-11-28 18:06:22.041481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2325769 ] 00:04:10.204 [2024-11-28 18:06:22.175520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.463 [2024-11-28 18:06:22.305834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.398 [2024-11-28 18:06:23.239336] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:11.398 request: 00:04:11.398 { 00:04:11.398 "trtype": "tcp", 00:04:11.398 "method": "nvmf_get_transports", 00:04:11.398 "req_id": 1 00:04:11.398 } 00:04:11.398 Got JSON-RPC error response 00:04:11.398 response: 00:04:11.398 { 00:04:11.398 "code": -19, 00:04:11.398 "message": "No such device" 00:04:11.398 } 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.398 [2024-11-28 18:06:23.247512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.398 18:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:11.398 { 00:04:11.398 "subsystems": [ 00:04:11.398 { 00:04:11.398 "subsystem": "fsdev", 00:04:11.398 "config": [ 00:04:11.398 { 00:04:11.398 "method": "fsdev_set_opts", 00:04:11.398 "params": { 00:04:11.398 "fsdev_io_pool_size": 65535, 00:04:11.398 "fsdev_io_cache_size": 256 00:04:11.398 } 00:04:11.398 } 00:04:11.398 ] 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "subsystem": "keyring", 00:04:11.398 "config": [] 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "subsystem": "iobuf", 00:04:11.398 "config": [ 00:04:11.398 { 00:04:11.398 "method": "iobuf_set_options", 00:04:11.398 "params": { 00:04:11.398 "small_pool_count": 8192, 00:04:11.398 "large_pool_count": 1024, 00:04:11.398 "small_bufsize": 8192, 00:04:11.398 "large_bufsize": 135168, 00:04:11.398 "enable_numa": false 00:04:11.398 } 00:04:11.398 } 00:04:11.398 ] 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "subsystem": "sock", 00:04:11.398 "config": [ 
00:04:11.398 { 00:04:11.398 "method": "sock_set_default_impl", 00:04:11.398 "params": { 00:04:11.398 "impl_name": "posix" 00:04:11.398 } 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "method": "sock_impl_set_options", 00:04:11.398 "params": { 00:04:11.398 "impl_name": "ssl", 00:04:11.398 "recv_buf_size": 4096, 00:04:11.398 "send_buf_size": 4096, 00:04:11.398 "enable_recv_pipe": true, 00:04:11.398 "enable_quickack": false, 00:04:11.398 "enable_placement_id": 0, 00:04:11.398 "enable_zerocopy_send_server": true, 00:04:11.398 "enable_zerocopy_send_client": false, 00:04:11.398 "zerocopy_threshold": 0, 00:04:11.398 "tls_version": 0, 00:04:11.398 "enable_ktls": false 00:04:11.398 } 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "method": "sock_impl_set_options", 00:04:11.398 "params": { 00:04:11.398 "impl_name": "posix", 00:04:11.398 "recv_buf_size": 2097152, 00:04:11.398 "send_buf_size": 2097152, 00:04:11.398 "enable_recv_pipe": true, 00:04:11.398 "enable_quickack": false, 00:04:11.398 "enable_placement_id": 0, 00:04:11.398 "enable_zerocopy_send_server": true, 00:04:11.398 "enable_zerocopy_send_client": false, 00:04:11.398 "zerocopy_threshold": 0, 00:04:11.398 "tls_version": 0, 00:04:11.398 "enable_ktls": false 00:04:11.398 } 00:04:11.398 } 00:04:11.398 ] 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "subsystem": "vmd", 00:04:11.398 "config": [] 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "subsystem": "accel", 00:04:11.398 "config": [ 00:04:11.398 { 00:04:11.398 "method": "accel_set_options", 00:04:11.398 "params": { 00:04:11.398 "small_cache_size": 128, 00:04:11.398 "large_cache_size": 16, 00:04:11.398 "task_count": 2048, 00:04:11.398 "sequence_count": 2048, 00:04:11.398 "buf_count": 2048 00:04:11.398 } 00:04:11.398 } 00:04:11.398 ] 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "subsystem": "bdev", 00:04:11.398 "config": [ 00:04:11.398 { 00:04:11.398 "method": "bdev_set_options", 00:04:11.398 "params": { 00:04:11.398 "bdev_io_pool_size": 65535, 00:04:11.398 "bdev_io_cache_size": 256, 00:04:11.398 "bdev_auto_examine": true, 00:04:11.398 "iobuf_small_cache_size": 128, 00:04:11.398 "iobuf_large_cache_size": 16 00:04:11.398 } 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "method": "bdev_raid_set_options", 00:04:11.398 "params": { 00:04:11.398 "process_window_size_kb": 1024, 00:04:11.398 "process_max_bandwidth_mb_sec": 0 00:04:11.398 } 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "method": "bdev_iscsi_set_options", 00:04:11.398 "params": { 00:04:11.398 "timeout_sec": 30 00:04:11.398 } 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "method": "bdev_nvme_set_options", 00:04:11.398 "params": { 00:04:11.398 "action_on_timeout": "none", 00:04:11.398 "timeout_us": 0, 00:04:11.398 "timeout_admin_us": 0, 00:04:11.398 "keep_alive_timeout_ms": 10000, 00:04:11.398 "arbitration_burst": 0, 00:04:11.398 "low_priority_weight": 0, 00:04:11.398 "medium_priority_weight": 0, 00:04:11.398 "high_priority_weight": 0, 00:04:11.398 "nvme_adminq_poll_period_us": 10000, 00:04:11.398 "nvme_ioq_poll_period_us": 0, 00:04:11.398 "io_queue_requests": 0, 00:04:11.398 "delay_cmd_submit": true, 00:04:11.398 "transport_retry_count": 4, 00:04:11.398 "bdev_retry_count": 3, 00:04:11.398 "transport_ack_timeout": 0, 00:04:11.398 "ctrlr_loss_timeout_sec": 0, 00:04:11.398 "reconnect_delay_sec": 0, 00:04:11.398 "fast_io_fail_timeout_sec": 0, 00:04:11.398 "disable_auto_failback": false, 00:04:11.398 "generate_uuids": false, 00:04:11.398 "transport_tos": 0, 00:04:11.398 "nvme_error_stat": false, 00:04:11.398 "rdma_srq_size": 0, 00:04:11.398 "io_path_stat": 
false, 00:04:11.398 "allow_accel_sequence": false, 00:04:11.398 "rdma_max_cq_size": 0, 00:04:11.398 "rdma_cm_event_timeout_ms": 0, 00:04:11.398 "dhchap_digests": [ 00:04:11.398 "sha256", 00:04:11.398 "sha384", 00:04:11.398 "sha512" 00:04:11.398 ], 00:04:11.398 "dhchap_dhgroups": [ 00:04:11.398 "null", 00:04:11.398 "ffdhe2048", 00:04:11.398 "ffdhe3072", 00:04:11.398 "ffdhe4096", 00:04:11.398 "ffdhe6144", 00:04:11.398 "ffdhe8192" 00:04:11.398 ] 00:04:11.398 } 00:04:11.398 }, 00:04:11.398 { 00:04:11.398 "method": "bdev_nvme_set_hotplug", 00:04:11.398 "params": { 00:04:11.398 "period_us": 100000, 00:04:11.399 "enable": false 00:04:11.399 } 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "method": "bdev_wait_for_examine" 00:04:11.399 } 00:04:11.399 ] 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "subsystem": "scsi", 00:04:11.399 "config": null 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "subsystem": "scheduler", 00:04:11.399 "config": [ 00:04:11.399 { 00:04:11.399 "method": "framework_set_scheduler", 00:04:11.399 "params": { 00:04:11.399 "name": "static" 00:04:11.399 } 00:04:11.399 } 00:04:11.399 ] 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "subsystem": "vhost_scsi", 00:04:11.399 "config": [] 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "subsystem": "vhost_blk", 00:04:11.399 "config": [] 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "subsystem": "ublk", 00:04:11.399 "config": [] 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "subsystem": "nbd", 00:04:11.399 "config": [] 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "subsystem": "nvmf", 00:04:11.399 "config": [ 00:04:11.399 { 00:04:11.399 "method": "nvmf_set_config", 00:04:11.399 "params": { 00:04:11.399 "discovery_filter": "match_any", 00:04:11.399 "admin_cmd_passthru": { 00:04:11.399 "identify_ctrlr": false 00:04:11.399 }, 00:04:11.399 "dhchap_digests": [ 00:04:11.399 "sha256", 00:04:11.399 "sha384", 00:04:11.399 "sha512" 00:04:11.399 ], 00:04:11.399 "dhchap_dhgroups": [ 00:04:11.399 "null", 00:04:11.399 "ffdhe2048", 00:04:11.399 "ffdhe3072", 00:04:11.399 "ffdhe4096", 00:04:11.399 "ffdhe6144", 00:04:11.399 "ffdhe8192" 00:04:11.399 ] 00:04:11.399 } 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "method": "nvmf_set_max_subsystems", 00:04:11.399 "params": { 00:04:11.399 "max_subsystems": 1024 00:04:11.399 } 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "method": "nvmf_set_crdt", 00:04:11.399 "params": { 00:04:11.399 "crdt1": 0, 00:04:11.399 "crdt2": 0, 00:04:11.399 "crdt3": 0 00:04:11.399 } 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "method": "nvmf_create_transport", 00:04:11.399 "params": { 00:04:11.399 "trtype": "TCP", 00:04:11.399 "max_queue_depth": 128, 00:04:11.399 "max_io_qpairs_per_ctrlr": 127, 00:04:11.399 "in_capsule_data_size": 4096, 00:04:11.399 "max_io_size": 131072, 00:04:11.399 "io_unit_size": 131072, 00:04:11.399 "max_aq_depth": 128, 00:04:11.399 "num_shared_buffers": 511, 00:04:11.399 "buf_cache_size": 4294967295, 00:04:11.399 "dif_insert_or_strip": false, 00:04:11.399 "zcopy": false, 00:04:11.399 "c2h_success": true, 00:04:11.399 "sock_priority": 0, 00:04:11.399 "abort_timeout_sec": 1, 00:04:11.399 "ack_timeout": 0, 00:04:11.399 "data_wr_pool_size": 0 00:04:11.399 } 00:04:11.399 } 00:04:11.399 ] 00:04:11.399 }, 00:04:11.399 { 00:04:11.399 "subsystem": "iscsi", 00:04:11.399 "config": [ 00:04:11.399 { 00:04:11.399 "method": "iscsi_set_options", 00:04:11.399 "params": { 00:04:11.399 "node_base": "iqn.2016-06.io.spdk", 00:04:11.399 "max_sessions": 128, 00:04:11.399 "max_connections_per_session": 2, 00:04:11.399 "max_queue_depth": 64, 00:04:11.399 
"default_time2wait": 2, 00:04:11.399 "default_time2retain": 20, 00:04:11.399 "first_burst_length": 8192, 00:04:11.399 "immediate_data": true, 00:04:11.399 "allow_duplicated_isid": false, 00:04:11.399 "error_recovery_level": 0, 00:04:11.399 "nop_timeout": 60, 00:04:11.399 "nop_in_interval": 30, 00:04:11.399 "disable_chap": false, 00:04:11.399 "require_chap": false, 00:04:11.399 "mutual_chap": false, 00:04:11.399 "chap_group": 0, 00:04:11.399 "max_large_datain_per_connection": 64, 00:04:11.399 "max_r2t_per_connection": 4, 00:04:11.399 "pdu_pool_size": 36864, 00:04:11.399 "immediate_data_pool_size": 16384, 00:04:11.399 "data_out_pool_size": 2048 00:04:11.399 } 00:04:11.399 } 00:04:11.399 ] 00:04:11.399 } 00:04:11.399 ] 00:04:11.399 } 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2325769 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2325769 ']' 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2325769 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2325769 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2325769' 00:04:11.399 killing process with pid 2325769 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2325769 00:04:11.399 18:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2325769 00:04:13.928 18:06:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2326187 00:04:13.928 18:06:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:13.928 18:06:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:19.196 18:06:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2326187 00:04:19.196 18:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2326187 ']' 00:04:19.196 18:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2326187 00:04:19.196 18:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:19.196 18:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.196 18:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2326187 00:04:19.196 18:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.196 18:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.196 18:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2326187' 00:04:19.196 killing process with pid 2326187 00:04:19.196 
18:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2326187 00:04:19.196 18:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2326187 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:21.726 00:04:21.726 real 0m11.432s 00:04:21.726 user 0m10.969s 00:04:21.726 sys 0m1.038s 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.726 ************************************ 00:04:21.726 END TEST skip_rpc_with_json 00:04:21.726 ************************************ 00:04:21.726 18:06:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:21.726 18:06:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.726 18:06:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.726 18:06:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.726 ************************************ 00:04:21.726 START TEST skip_rpc_with_delay 00:04:21.726 ************************************ 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.726 [2024-11-28 18:06:33.517180] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC 
server is going to be started. 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:21.726 00:04:21.726 real 0m0.150s 00:04:21.726 user 0m0.077s 00:04:21.726 sys 0m0.072s 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.726 18:06:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:21.726 ************************************ 00:04:21.726 END TEST skip_rpc_with_delay 00:04:21.726 ************************************ 00:04:21.726 18:06:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:21.726 18:06:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:21.726 18:06:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:21.726 18:06:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.727 18:06:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.727 18:06:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.727 ************************************ 00:04:21.727 START TEST exit_on_failed_rpc_init 00:04:21.727 ************************************ 00:04:21.727 18:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:21.727 18:06:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2327171 00:04:21.727 18:06:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.727 18:06:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2327171 00:04:21.727 18:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2327171 ']' 00:04:21.727 18:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.727 18:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.727 18:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.727 18:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.727 18:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.727 [2024-11-28 18:06:33.719851] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:21.727 [2024-11-28 18:06:33.720002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327171 ] 00:04:21.985 [2024-11-28 18:06:33.862786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.985 [2024-11-28 18:06:34.000282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:22.920 18:06:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.179 [2024-11-28 18:06:35.058629] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:23.179 [2024-11-28 18:06:35.058786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327310 ] 00:04:23.179 [2024-11-28 18:06:35.196980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.436 [2024-11-28 18:06:35.335855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.436 [2024-11-28 18:06:35.336001] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
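The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the condition exit_on_failed_rpc_init is built around: both targets tried to listen on the default RPC path. A hedged sketch of how two targets normally coexist, each with its own -r socket; the second socket path and the core masks below are illustrative assumptions, not values from this run:

    SPDK=/path/to/spdk                                           # assumption: local SPDK checkout
    $SPDK/build/bin/spdk_tgt -m 0x1 &                            # first target, default /var/tmp/spdk.sock
    $SPDK/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &     # second target on its own RPC socket
    $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods  # address the second target explicitly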
00:04:23.436 [2024-11-28 18:06:35.336044] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:23.436 [2024-11-28 18:06:35.336067] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2327171 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2327171 ']' 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2327171 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2327171 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2327171' 00:04:23.695 killing process with pid 2327171 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2327171 00:04:23.695 18:06:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2327171 00:04:26.226 00:04:26.226 real 0m4.489s 00:04:26.226 user 0m4.944s 00:04:26.226 sys 0m0.744s 00:04:26.226 18:06:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.226 18:06:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.226 ************************************ 00:04:26.226 END TEST exit_on_failed_rpc_init 00:04:26.226 ************************************ 00:04:26.226 18:06:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:26.226 00:04:26.226 real 0m23.903s 00:04:26.226 user 0m23.160s 00:04:26.226 sys 0m2.535s 00:04:26.226 18:06:38 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.226 18:06:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.226 ************************************ 00:04:26.226 END TEST skip_rpc 00:04:26.226 ************************************ 00:04:26.226 18:06:38 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:26.226 18:06:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.226 18:06:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.226 18:06:38 -- 
common/autotest_common.sh@10 -- # set +x 00:04:26.226 ************************************ 00:04:26.226 START TEST rpc_client 00:04:26.226 ************************************ 00:04:26.226 18:06:38 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:26.226 * Looking for test storage... 00:04:26.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:26.226 18:06:38 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.226 18:06:38 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.226 18:06:38 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.226 18:06:38 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.226 18:06:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.485 18:06:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:26.485 18:06:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:26.485 18:06:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.485 18:06:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:26.485 18:06:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.485 18:06:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:26.485 18:06:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:26.485 18:06:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.486 18:06:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:26.486 18:06:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.486 18:06:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.486 18:06:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.486 18:06:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:26.486 18:06:38 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.486 18:06:38 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.486 --rc genhtml_branch_coverage=1 00:04:26.486 --rc genhtml_function_coverage=1 00:04:26.486 --rc genhtml_legend=1 00:04:26.486 --rc geninfo_all_blocks=1 00:04:26.486 --rc geninfo_unexecuted_blocks=1 00:04:26.486 00:04:26.486 ' 00:04:26.486 18:06:38 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.486 --rc genhtml_branch_coverage=1 00:04:26.486 --rc genhtml_function_coverage=1 00:04:26.486 --rc genhtml_legend=1 00:04:26.486 --rc geninfo_all_blocks=1 00:04:26.486 --rc geninfo_unexecuted_blocks=1 00:04:26.486 00:04:26.486 ' 00:04:26.486 18:06:38 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.486 --rc genhtml_branch_coverage=1 00:04:26.486 --rc genhtml_function_coverage=1 00:04:26.486 --rc genhtml_legend=1 00:04:26.486 --rc geninfo_all_blocks=1 00:04:26.486 --rc geninfo_unexecuted_blocks=1 00:04:26.486 00:04:26.486 ' 00:04:26.486 18:06:38 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.486 --rc genhtml_branch_coverage=1 00:04:26.486 --rc genhtml_function_coverage=1 00:04:26.486 --rc genhtml_legend=1 00:04:26.486 --rc geninfo_all_blocks=1 00:04:26.486 --rc geninfo_unexecuted_blocks=1 00:04:26.486 00:04:26.486 ' 00:04:26.486 18:06:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:26.486 OK 00:04:26.486 18:06:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:26.486 00:04:26.486 real 0m0.193s 00:04:26.486 user 0m0.113s 00:04:26.486 sys 0m0.088s 00:04:26.486 18:06:38 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.486 18:06:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:26.486 ************************************ 00:04:26.486 END TEST rpc_client 00:04:26.486 ************************************ 00:04:26.486 18:06:38 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:26.486 18:06:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.486 18:06:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.486 18:06:38 -- common/autotest_common.sh@10 -- # set +x 00:04:26.486 ************************************ 00:04:26.486 START TEST json_config 00:04:26.486 ************************************ 00:04:26.486 18:06:38 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:26.486 18:06:38 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.486 18:06:38 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.486 18:06:38 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.486 18:06:38 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.486 18:06:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.486 18:06:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.486 18:06:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.486 18:06:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.486 18:06:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.486 18:06:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.486 18:06:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.486 18:06:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.486 18:06:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.486 18:06:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.486 18:06:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.486 18:06:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:26.486 18:06:38 json_config -- scripts/common.sh@345 -- # : 1 00:04:26.486 18:06:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.486 18:06:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.486 18:06:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:26.486 18:06:38 json_config -- scripts/common.sh@353 -- # local d=1 00:04:26.486 18:06:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.486 18:06:38 json_config -- scripts/common.sh@355 -- # echo 1 00:04:26.486 18:06:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.486 18:06:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:26.486 18:06:38 json_config -- scripts/common.sh@353 -- # local d=2 00:04:26.486 18:06:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.486 18:06:38 json_config -- scripts/common.sh@355 -- # echo 2 00:04:26.486 18:06:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.486 18:06:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.486 18:06:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.486 18:06:38 json_config -- scripts/common.sh@368 -- # return 0 00:04:26.486 18:06:38 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.486 18:06:38 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.486 --rc genhtml_branch_coverage=1 00:04:26.486 --rc genhtml_function_coverage=1 00:04:26.486 --rc genhtml_legend=1 00:04:26.486 --rc geninfo_all_blocks=1 00:04:26.486 --rc geninfo_unexecuted_blocks=1 00:04:26.486 00:04:26.486 ' 00:04:26.486 18:06:38 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.486 --rc genhtml_branch_coverage=1 00:04:26.486 --rc genhtml_function_coverage=1 00:04:26.486 --rc genhtml_legend=1 00:04:26.486 --rc geninfo_all_blocks=1 00:04:26.486 --rc geninfo_unexecuted_blocks=1 00:04:26.486 00:04:26.486 ' 00:04:26.486 18:06:38 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.486 --rc genhtml_branch_coverage=1 00:04:26.486 --rc genhtml_function_coverage=1 00:04:26.486 --rc genhtml_legend=1 00:04:26.486 --rc geninfo_all_blocks=1 00:04:26.486 --rc geninfo_unexecuted_blocks=1 00:04:26.486 00:04:26.486 ' 00:04:26.486 18:06:38 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.486 --rc genhtml_branch_coverage=1 00:04:26.486 --rc genhtml_function_coverage=1 00:04:26.486 --rc genhtml_legend=1 00:04:26.486 --rc geninfo_all_blocks=1 00:04:26.486 --rc geninfo_unexecuted_blocks=1 00:04:26.486 00:04:26.486 ' 00:04:26.486 18:06:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:26.486 18:06:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.486 18:06:38 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:26.486 18:06:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:26.486 18:06:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.486 18:06:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.486 18:06:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.486 18:06:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.486 18:06:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.486 18:06:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.486 18:06:38 json_config -- paths/export.sh@5 -- # export PATH 00:04:26.487 18:06:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.487 18:06:38 json_config -- nvmf/common.sh@51 -- # : 0 00:04:26.487 18:06:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:26.487 18:06:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:26.487 18:06:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.487 18:06:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.487 18:06:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.487 18:06:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:26.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:26.487 18:06:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:26.487 18:06:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:26.487 18:06:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:26.487 INFO: JSON configuration test init 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:26.487 18:06:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.487 18:06:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:26.487 18:06:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.487 18:06:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.487 18:06:38 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:26.487 18:06:38 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:26.487 18:06:38 json_config -- json_config/common.sh@10 -- # shift 00:04:26.487 18:06:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:26.487 18:06:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:26.487 18:06:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:26.487 18:06:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.487 18:06:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.487 18:06:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2327839 00:04:26.487 18:06:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:26.487 18:06:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:26.487 Waiting for target to run... 00:04:26.487 18:06:38 json_config -- json_config/common.sh@25 -- # waitforlisten 2327839 /var/tmp/spdk_tgt.sock 00:04:26.487 18:06:38 json_config -- common/autotest_common.sh@835 -- # '[' -z 2327839 ']' 00:04:26.487 18:06:38 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.487 18:06:38 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.487 18:06:38 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:26.487 18:06:38 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.487 18:06:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.745 [2024-11-28 18:06:38.671218] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
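json_config_test_start_app launches the target with --wait-for-rpc, so subsystem initialization is held back until it is triggered over the RPC socket. A minimal sketch of how such a target is normally configured and released is below; the socket path and flags mirror the log, while the pre-init step and the explicit framework_start_init call are a generic assumed sequence, not necessarily exactly what this harness does:

    SPDK=/path/to/spdk                                           # assumption: local SPDK checkout
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # issue any pre-init *_set_options RPCs here (iobuf/sock/accel, as seen in the
    # saved config earlier in this log), then let the framework finish starting up:
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init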
00:04:26.745 [2024-11-28 18:06:38.671376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327839 ] 00:04:27.003 [2024-11-28 18:06:39.089067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.261 [2024-11-28 18:06:39.211622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.519 18:06:39 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.519 18:06:39 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:27.519 18:06:39 json_config -- json_config/common.sh@26 -- # echo '' 00:04:27.519 00:04:27.519 18:06:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:27.519 18:06:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:27.519 18:06:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.519 18:06:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.519 18:06:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:27.519 18:06:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:27.519 18:06:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.519 18:06:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.778 18:06:39 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:27.778 18:06:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:27.778 18:06:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:31.961 18:06:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.961 18:06:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:31.961 18:06:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:31.961 18:06:43 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@54 -- # sort 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:31.961 18:06:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.961 18:06:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:31.961 18:06:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.961 18:06:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:31.961 18:06:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:31.961 18:06:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.219 MallocForNvmf0 00:04:32.219 18:06:44 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.219 18:06:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.476 MallocForNvmf1 00:04:32.476 18:06:44 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:32.476 18:06:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:32.734 [2024-11-28 18:06:44.643640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.734 18:06:44 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:32.734 18:06:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:32.991 18:06:44 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:32.991 18:06:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:33.248 18:06:45 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:33.248 18:06:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:33.505 18:06:45 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:33.505 18:06:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:33.812 [2024-11-28 18:06:45.715373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:33.812 18:06:45 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:33.812 18:06:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.812 18:06:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.812 18:06:45 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:33.812 18:06:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.812 18:06:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.812 18:06:45 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:33.812 18:06:45 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:33.812 18:06:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:34.069 MallocBdevForConfigChangeCheck 00:04:34.069 18:06:46 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:34.069 18:06:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.069 18:06:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.069 18:06:46 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:34.069 18:06:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.634 18:06:46 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:34.634 INFO: shutting down applications... 
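Condensed out of the tgt_rpc wrappers, the bring-up just performed is the standard NVMe-oF/TCP target sequence. The rpc.py calls below mirror the log; only the $SPDK checkout path is an assumption:

    SPDK=/path/to/spdk                                           # assumption: local SPDK checkout
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0               # TCP transport, as in the log
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0          # 8 MiB malloc bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420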
00:04:34.634 18:06:46 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:34.634 18:06:46 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:34.634 18:06:46 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:34.634 18:06:46 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:36.533 Calling clear_iscsi_subsystem 00:04:36.533 Calling clear_nvmf_subsystem 00:04:36.533 Calling clear_nbd_subsystem 00:04:36.533 Calling clear_ublk_subsystem 00:04:36.533 Calling clear_vhost_blk_subsystem 00:04:36.533 Calling clear_vhost_scsi_subsystem 00:04:36.533 Calling clear_bdev_subsystem 00:04:36.533 18:06:48 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:36.533 18:06:48 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:36.533 18:06:48 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:36.533 18:06:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.533 18:06:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:36.533 18:06:48 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:36.533 18:06:48 json_config -- json_config/json_config.sh@352 -- # break 00:04:36.533 18:06:48 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:36.533 18:06:48 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:36.533 18:06:48 json_config -- json_config/common.sh@31 -- # local app=target 00:04:36.533 18:06:48 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:36.533 18:06:48 json_config -- json_config/common.sh@35 -- # [[ -n 2327839 ]] 00:04:36.533 18:06:48 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2327839 00:04:36.533 18:06:48 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:36.533 18:06:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.533 18:06:48 json_config -- json_config/common.sh@41 -- # kill -0 2327839 00:04:36.533 18:06:48 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.099 18:06:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.099 18:06:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.099 18:06:49 json_config -- json_config/common.sh@41 -- # kill -0 2327839 00:04:37.099 18:06:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.666 18:06:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.666 18:06:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.666 18:06:49 json_config -- json_config/common.sh@41 -- # kill -0 2327839 00:04:37.666 18:06:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.666 18:06:49 json_config -- json_config/common.sh@43 -- # break 00:04:37.666 18:06:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.666 18:06:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.666 SPDK target shutdown done 00:04:37.666 18:06:49 
json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:37.666 INFO: relaunching applications... 00:04:37.666 18:06:49 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.666 18:06:49 json_config -- json_config/common.sh@9 -- # local app=target 00:04:37.666 18:06:49 json_config -- json_config/common.sh@10 -- # shift 00:04:37.666 18:06:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.666 18:06:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.666 18:06:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.666 18:06:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.666 18:06:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.666 18:06:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2329298 00:04:37.666 18:06:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.666 18:06:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:37.666 Waiting for target to run... 00:04:37.666 18:06:49 json_config -- json_config/common.sh@25 -- # waitforlisten 2329298 /var/tmp/spdk_tgt.sock 00:04:37.666 18:06:49 json_config -- common/autotest_common.sh@835 -- # '[' -z 2329298 ']' 00:04:37.666 18:06:49 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.666 18:06:49 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.666 18:06:49 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.666 18:06:49 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.666 18:06:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.666 [2024-11-28 18:06:49.643195] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
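The relaunch above replays the spdk_tgt_config.json produced by the earlier save_config call. Stripped of the harness wrappers, that round trip is roughly the sketch below; the $SPDK variable is an assumption, the socket path and flags mirror the log:

    SPDK=/path/to/spdk                                           # assumption: local SPDK checkout
    # dump the running target's configuration over its RPC socket ...
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > $SPDK/spdk_tgt_config.json
    # ... and start a fresh target directly from that file
    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json $SPDK/spdk_tgt_config.json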
00:04:37.666 [2024-11-28 18:06:49.643339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2329298 ] 00:04:38.234 [2024-11-28 18:06:50.236003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.500 [2024-11-28 18:06:50.367942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.710 [2024-11-28 18:06:54.174332] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.710 [2024-11-28 18:06:54.206919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:42.710 18:06:54 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.710 18:06:54 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:42.710 18:06:54 json_config -- json_config/common.sh@26 -- # echo '' 00:04:42.710 00:04:42.710 18:06:54 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:42.710 18:06:54 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:42.710 INFO: Checking if target configuration is the same... 00:04:42.710 18:06:54 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.710 18:06:54 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:42.710 18:06:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.710 + '[' 2 -ne 2 ']' 00:04:42.710 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:42.710 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:42.710 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.710 +++ basename /dev/fd/62 00:04:42.710 ++ mktemp /tmp/62.XXX 00:04:42.710 + tmp_file_1=/tmp/62.6eL 00:04:42.710 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.710 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.710 + tmp_file_2=/tmp/spdk_tgt_config.json.nRB 00:04:42.710 + ret=0 00:04:42.710 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:42.710 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:42.710 + diff -u /tmp/62.6eL /tmp/spdk_tgt_config.json.nRB 00:04:42.710 + echo 'INFO: JSON config files are the same' 00:04:42.710 INFO: JSON config files are the same 00:04:42.710 + rm /tmp/62.6eL /tmp/spdk_tgt_config.json.nRB 00:04:42.710 + exit 0 00:04:42.710 18:06:54 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:42.710 18:06:54 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:42.710 INFO: changing configuration and checking if this can be detected... 
00:04:42.710 18:06:54 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:42.710 18:06:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:42.968 18:06:54 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.968 18:06:54 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:42.968 18:06:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.968 + '[' 2 -ne 2 ']' 00:04:42.968 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:42.968 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:42.968 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.968 +++ basename /dev/fd/62 00:04:42.968 ++ mktemp /tmp/62.XXX 00:04:42.968 + tmp_file_1=/tmp/62.40g 00:04:42.968 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.968 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.968 + tmp_file_2=/tmp/spdk_tgt_config.json.6XY 00:04:42.968 + ret=0 00:04:42.968 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.535 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.535 + diff -u /tmp/62.40g /tmp/spdk_tgt_config.json.6XY 00:04:43.535 + ret=1 00:04:43.535 + echo '=== Start of file: /tmp/62.40g ===' 00:04:43.535 + cat /tmp/62.40g 00:04:43.535 + echo '=== End of file: /tmp/62.40g ===' 00:04:43.535 + echo '' 00:04:43.535 + echo '=== Start of file: /tmp/spdk_tgt_config.json.6XY ===' 00:04:43.535 + cat /tmp/spdk_tgt_config.json.6XY 00:04:43.535 + echo '=== End of file: /tmp/spdk_tgt_config.json.6XY ===' 00:04:43.535 + echo '' 00:04:43.535 + rm /tmp/62.40g /tmp/spdk_tgt_config.json.6XY 00:04:43.535 + exit 1 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:43.535 INFO: configuration change detected. 
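Change detection then reuses the same diff, but first mutates the running target by deleting the marker bdev MallocBdevForConfigChangeCheck over RPC; this time a non-empty diff (ret=1 above) is the success case. A hedged sketch of that step, with the bdev name and paths taken from the trace:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_sock=/var/tmp/spdk_tgt.sock

    # Drop the bdev that exists solely so this check has something to change.
    "$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_malloc_delete MallocBdevForConfigChangeCheck

    # Re-run the normalized diff; json_diff.sh exits 0 on identical configs,
    # non-zero when they differ, so a difference is now the expected outcome.
    if "$rootdir/test/json_config/json_diff.sh" \
            <("$rootdir/scripts/rpc.py" -s "$rpc_sock" save_config) \
            "$rootdir/spdk_tgt_config.json"; then
        echo 'ERROR: configuration change was NOT detected' >&2
        exit 1
    else
        echo 'INFO: configuration change detected.'
    fi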
00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@324 -- # [[ -n 2329298 ]] 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.535 18:06:55 json_config -- json_config/json_config.sh@330 -- # killprocess 2329298 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@954 -- # '[' -z 2329298 ']' 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@958 -- # kill -0 2329298 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@959 -- # uname 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2329298 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2329298' 00:04:43.535 killing process with pid 2329298 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@973 -- # kill 2329298 00:04:43.535 18:06:55 json_config -- common/autotest_common.sh@978 -- # wait 2329298 00:04:46.086 18:06:57 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.086 18:06:57 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:46.086 18:06:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.086 18:06:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.086 18:06:57 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:46.086 18:06:57 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:46.086 INFO: Success 00:04:46.086 00:04:46.086 real 0m19.530s 
00:04:46.086 user 0m21.218s 00:04:46.086 sys 0m3.050s 00:04:46.086 18:06:57 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.086 18:06:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.086 ************************************ 00:04:46.086 END TEST json_config 00:04:46.086 ************************************ 00:04:46.086 18:06:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:46.086 18:06:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.086 18:06:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.086 18:06:57 -- common/autotest_common.sh@10 -- # set +x 00:04:46.086 ************************************ 00:04:46.086 START TEST json_config_extra_key 00:04:46.086 ************************************ 00:04:46.086 18:06:58 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:46.086 18:06:58 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.086 18:06:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.086 18:06:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.086 18:06:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.086 18:06:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:46.086 18:06:58 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.086 18:06:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.086 --rc genhtml_branch_coverage=1 00:04:46.086 --rc genhtml_function_coverage=1 00:04:46.086 --rc genhtml_legend=1 00:04:46.086 --rc geninfo_all_blocks=1 00:04:46.086 --rc geninfo_unexecuted_blocks=1 00:04:46.086 00:04:46.086 ' 00:04:46.086 18:06:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.086 --rc genhtml_branch_coverage=1 00:04:46.086 --rc genhtml_function_coverage=1 00:04:46.086 --rc genhtml_legend=1 00:04:46.086 --rc geninfo_all_blocks=1 00:04:46.086 --rc geninfo_unexecuted_blocks=1 00:04:46.086 00:04:46.086 ' 00:04:46.086 18:06:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.086 --rc genhtml_branch_coverage=1 00:04:46.086 --rc genhtml_function_coverage=1 00:04:46.086 --rc genhtml_legend=1 00:04:46.086 --rc geninfo_all_blocks=1 00:04:46.086 --rc geninfo_unexecuted_blocks=1 00:04:46.086 00:04:46.086 ' 00:04:46.086 18:06:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.086 --rc genhtml_branch_coverage=1 00:04:46.086 --rc genhtml_function_coverage=1 00:04:46.086 --rc genhtml_legend=1 00:04:46.086 --rc geninfo_all_blocks=1 00:04:46.086 --rc geninfo_unexecuted_blocks=1 00:04:46.086 00:04:46.086 ' 00:04:46.086 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.086 
18:06:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:46.086 18:06:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.087 18:06:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:46.087 18:06:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:46.087 18:06:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.087 18:06:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.087 18:06:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.087 18:06:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.087 18:06:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.087 18:06:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.087 18:06:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:46.087 18:06:58 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.087 18:06:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:46.087 18:06:58 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:46.087 18:06:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:46.087 18:06:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.087 18:06:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.087 18:06:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.087 18:06:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:46.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:46.087 18:06:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:46.087 18:06:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:46.087 18:06:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:46.087 INFO: launching applications... 
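The "[: : integer expression expected" line above is a non-fatal complaint from nvmf/common.sh: an empty variable reaches a numeric test ('[' '' -eq 1 ']'), which a ${VAR:-0} guard would avoid. The launch that follows starts spdk_tgt with the extra_key.json config and waits until the RPC UNIX socket answers. A rough sketch of that start-and-wait pattern (the real waitforlisten helper is more elaborate; the retry count and probe RPC here are assumptions, though spdk_get_version does appear in the target's method list later in this log):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_sock=/var/tmp/spdk_tgt.sock

    # Start the target with the extra_key JSON configuration, backgrounded.
    "$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$rpc_sock" \
        --json "$rootdir/test/json_config/extra_key.json" &
    tgt_pid=$!

    # Poll the RPC socket until the target answers; spdk_get_version is a
    # cheap request that only succeeds once the server is listening.
    for _ in $(seq 1 100); do
        if "$rootdir/scripts/rpc.py" -s "$rpc_sock" spdk_get_version >/dev/null 2>&1; then
            echo "target is up (pid $tgt_pid)"
            break
        fi
        sleep 0.1
    done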
00:04:46.087 18:06:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:46.087 18:06:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:46.087 18:06:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:46.087 18:06:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.087 18:06:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.087 18:06:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.087 18:06:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.087 18:06:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.087 18:06:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2330479 00:04:46.087 18:06:58 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:46.087 18:06:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.087 Waiting for target to run... 00:04:46.087 18:06:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2330479 /var/tmp/spdk_tgt.sock 00:04:46.087 18:06:58 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2330479 ']' 00:04:46.087 18:06:58 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.087 18:06:58 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.087 18:06:58 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.087 18:06:58 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.087 18:06:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.345 [2024-11-28 18:06:58.235831] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:46.345 [2024-11-28 18:06:58.235977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330479 ] 00:04:46.913 [2024-11-28 18:06:58.834342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.913 [2024-11-28 18:06:58.964115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.849 18:06:59 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.849 18:06:59 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:47.849 18:06:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:47.849 00:04:47.849 18:06:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:47.849 INFO: shutting down applications... 
00:04:47.849 18:06:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:47.849 18:06:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:47.849 18:06:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.849 18:06:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2330479 ]] 00:04:47.849 18:06:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2330479 00:04:47.849 18:06:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.849 18:06:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.849 18:06:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2330479 00:04:47.849 18:06:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.417 18:07:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.417 18:07:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.417 18:07:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2330479 00:04:48.417 18:07:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.676 18:07:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.676 18:07:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.676 18:07:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2330479 00:04:48.676 18:07:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.243 18:07:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.243 18:07:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.243 18:07:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2330479 00:04:49.243 18:07:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.809 18:07:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.809 18:07:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.809 18:07:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2330479 00:04:49.809 18:07:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.376 18:07:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.376 18:07:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.376 18:07:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2330479 00:04:50.376 18:07:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.944 18:07:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.944 18:07:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.944 18:07:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2330479 00:04:50.944 18:07:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:50.944 18:07:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:50.944 18:07:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:50.944 18:07:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:50.944 SPDK target shutdown done 00:04:50.944 18:07:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:50.944 Success 00:04:50.944 00:04:50.944 real 0m4.751s 00:04:50.944 user 0m4.256s 00:04:50.944 sys 0m0.840s 
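The shutdown traced above sends SIGINT to the target, then polls with kill -0 every 0.5 s for up to 30 iterations before reporting "SPDK target shutdown done". A minimal sketch of that loop, mirroring json_config/common.sh in spirit (the SIGKILL escalation is an assumption, not shown in the log):

    shutdown_target() {
        local pid=$1

        # Ask the target to exit cleanly.
        kill -SIGINT "$pid" 2>/dev/null || return 0

        # kill -0 sends no signal; it only checks that the process still exists.
        for ((i = 0; i < 30; i++)); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done

        echo "target $pid did not exit in time, escalating to SIGKILL" >&2
        kill -9 "$pid" 2>/dev/null || true
        return 1
    }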
00:04:50.944 18:07:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.944 18:07:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:50.944 ************************************ 00:04:50.944 END TEST json_config_extra_key 00:04:50.944 ************************************ 00:04:50.944 18:07:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.944 18:07:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.944 18:07:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.944 18:07:02 -- common/autotest_common.sh@10 -- # set +x 00:04:50.944 ************************************ 00:04:50.944 START TEST alias_rpc 00:04:50.944 ************************************ 00:04:50.944 18:07:02 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.944 * Looking for test storage... 00:04:50.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:50.944 18:07:02 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.944 18:07:02 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.944 18:07:02 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.944 18:07:02 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.944 18:07:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:50.944 18:07:02 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.944 18:07:02 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.944 --rc genhtml_branch_coverage=1 00:04:50.944 --rc genhtml_function_coverage=1 00:04:50.944 --rc genhtml_legend=1 00:04:50.944 --rc geninfo_all_blocks=1 00:04:50.944 --rc geninfo_unexecuted_blocks=1 00:04:50.944 00:04:50.944 ' 00:04:50.944 18:07:02 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.944 --rc genhtml_branch_coverage=1 00:04:50.944 --rc genhtml_function_coverage=1 00:04:50.944 --rc genhtml_legend=1 00:04:50.944 --rc geninfo_all_blocks=1 00:04:50.944 --rc geninfo_unexecuted_blocks=1 00:04:50.944 00:04:50.944 ' 00:04:50.944 18:07:02 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.944 --rc genhtml_branch_coverage=1 00:04:50.944 --rc genhtml_function_coverage=1 00:04:50.944 --rc genhtml_legend=1 00:04:50.944 --rc geninfo_all_blocks=1 00:04:50.944 --rc geninfo_unexecuted_blocks=1 00:04:50.944 00:04:50.944 ' 00:04:50.944 18:07:02 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.944 --rc genhtml_branch_coverage=1 00:04:50.944 --rc genhtml_function_coverage=1 00:04:50.944 --rc genhtml_legend=1 00:04:50.944 --rc geninfo_all_blocks=1 00:04:50.944 --rc geninfo_unexecuted_blocks=1 00:04:50.944 00:04:50.944 ' 00:04:50.944 18:07:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:50.945 18:07:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2331079 00:04:50.945 18:07:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.945 18:07:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2331079 00:04:50.945 18:07:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2331079 ']' 00:04:50.945 18:07:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.945 18:07:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.945 18:07:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:04:50.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.945 18:07:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.945 18:07:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.203 [2024-11-28 18:07:03.045901] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:04:51.203 [2024-11-28 18:07:03.046060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2331079 ] 00:04:51.203 [2024-11-28 18:07:03.190233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.461 [2024-11-28 18:07:03.329149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.396 18:07:04 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.396 18:07:04 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:52.396 18:07:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:52.655 18:07:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2331079 00:04:52.655 18:07:04 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2331079 ']' 00:04:52.655 18:07:04 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2331079 00:04:52.655 18:07:04 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:52.655 18:07:04 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.655 18:07:04 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2331079 00:04:52.655 18:07:04 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.655 18:07:04 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.655 18:07:04 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2331079' 00:04:52.655 killing process with pid 2331079 00:04:52.655 18:07:04 alias_rpc -- common/autotest_common.sh@973 -- # kill 2331079 00:04:52.655 18:07:04 alias_rpc -- common/autotest_common.sh@978 -- # wait 2331079 00:04:55.185 00:04:55.185 real 0m4.278s 00:04:55.185 user 0m4.410s 00:04:55.185 sys 0m0.649s 00:04:55.185 18:07:07 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.185 18:07:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.185 ************************************ 00:04:55.185 END TEST alias_rpc 00:04:55.185 ************************************ 00:04:55.185 18:07:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:55.185 18:07:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:55.185 18:07:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.185 18:07:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.185 18:07:07 -- common/autotest_common.sh@10 -- # set +x 00:04:55.185 ************************************ 00:04:55.185 START TEST spdkcli_tcp 00:04:55.185 ************************************ 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:55.185 * Looking for test storage... 
00:04:55.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.185 18:07:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:55.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.185 --rc genhtml_branch_coverage=1 00:04:55.185 --rc genhtml_function_coverage=1 00:04:55.185 --rc genhtml_legend=1 00:04:55.185 --rc geninfo_all_blocks=1 00:04:55.185 --rc geninfo_unexecuted_blocks=1 00:04:55.185 00:04:55.185 ' 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:55.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.185 --rc genhtml_branch_coverage=1 00:04:55.185 --rc genhtml_function_coverage=1 00:04:55.185 --rc genhtml_legend=1 00:04:55.185 --rc geninfo_all_blocks=1 00:04:55.185 --rc 
geninfo_unexecuted_blocks=1 00:04:55.185 00:04:55.185 ' 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:55.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.185 --rc genhtml_branch_coverage=1 00:04:55.185 --rc genhtml_function_coverage=1 00:04:55.185 --rc genhtml_legend=1 00:04:55.185 --rc geninfo_all_blocks=1 00:04:55.185 --rc geninfo_unexecuted_blocks=1 00:04:55.185 00:04:55.185 ' 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:55.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.185 --rc genhtml_branch_coverage=1 00:04:55.185 --rc genhtml_function_coverage=1 00:04:55.185 --rc genhtml_legend=1 00:04:55.185 --rc geninfo_all_blocks=1 00:04:55.185 --rc geninfo_unexecuted_blocks=1 00:04:55.185 00:04:55.185 ' 00:04:55.185 18:07:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:55.185 18:07:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:55.185 18:07:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:55.185 18:07:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:55.185 18:07:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:55.185 18:07:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:55.185 18:07:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.185 18:07:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2331674 00:04:55.185 18:07:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:55.185 18:07:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2331674 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2331674 ']' 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.185 18:07:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.444 [2024-11-28 18:07:07.366241] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:04:55.444 [2024-11-28 18:07:07.366394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2331674 ] 00:04:55.444 [2024-11-28 18:07:07.508456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.702 [2024-11-28 18:07:07.647175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.702 [2024-11-28 18:07:07.647178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.637 18:07:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.637 18:07:08 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:56.637 18:07:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2331813 00:04:56.637 18:07:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:56.637 18:07:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:56.896 [ 00:04:56.896 "bdev_malloc_delete", 00:04:56.896 "bdev_malloc_create", 00:04:56.896 "bdev_null_resize", 00:04:56.896 "bdev_null_delete", 00:04:56.896 "bdev_null_create", 00:04:56.896 "bdev_nvme_cuse_unregister", 00:04:56.896 "bdev_nvme_cuse_register", 00:04:56.896 "bdev_opal_new_user", 00:04:56.896 "bdev_opal_set_lock_state", 00:04:56.896 "bdev_opal_delete", 00:04:56.896 "bdev_opal_get_info", 00:04:56.896 "bdev_opal_create", 00:04:56.896 "bdev_nvme_opal_revert", 00:04:56.896 "bdev_nvme_opal_init", 00:04:56.896 "bdev_nvme_send_cmd", 00:04:56.896 "bdev_nvme_set_keys", 00:04:56.896 "bdev_nvme_get_path_iostat", 00:04:56.896 "bdev_nvme_get_mdns_discovery_info", 00:04:56.896 "bdev_nvme_stop_mdns_discovery", 00:04:56.896 "bdev_nvme_start_mdns_discovery", 00:04:56.896 "bdev_nvme_set_multipath_policy", 00:04:56.896 "bdev_nvme_set_preferred_path", 00:04:56.896 "bdev_nvme_get_io_paths", 00:04:56.896 "bdev_nvme_remove_error_injection", 00:04:56.896 "bdev_nvme_add_error_injection", 00:04:56.896 "bdev_nvme_get_discovery_info", 00:04:56.896 "bdev_nvme_stop_discovery", 00:04:56.896 "bdev_nvme_start_discovery", 00:04:56.896 "bdev_nvme_get_controller_health_info", 00:04:56.896 "bdev_nvme_disable_controller", 00:04:56.896 "bdev_nvme_enable_controller", 00:04:56.896 "bdev_nvme_reset_controller", 00:04:56.896 "bdev_nvme_get_transport_statistics", 00:04:56.896 "bdev_nvme_apply_firmware", 00:04:56.896 "bdev_nvme_detach_controller", 00:04:56.896 "bdev_nvme_get_controllers", 00:04:56.896 "bdev_nvme_attach_controller", 00:04:56.896 "bdev_nvme_set_hotplug", 00:04:56.896 "bdev_nvme_set_options", 00:04:56.896 "bdev_passthru_delete", 00:04:56.896 "bdev_passthru_create", 00:04:56.896 "bdev_lvol_set_parent_bdev", 00:04:56.896 "bdev_lvol_set_parent", 00:04:56.896 "bdev_lvol_check_shallow_copy", 00:04:56.896 "bdev_lvol_start_shallow_copy", 00:04:56.896 "bdev_lvol_grow_lvstore", 00:04:56.896 "bdev_lvol_get_lvols", 00:04:56.896 "bdev_lvol_get_lvstores", 00:04:56.896 "bdev_lvol_delete", 00:04:56.896 "bdev_lvol_set_read_only", 00:04:56.896 "bdev_lvol_resize", 00:04:56.896 "bdev_lvol_decouple_parent", 00:04:56.896 "bdev_lvol_inflate", 00:04:56.896 "bdev_lvol_rename", 00:04:56.896 "bdev_lvol_clone_bdev", 00:04:56.896 "bdev_lvol_clone", 00:04:56.896 "bdev_lvol_snapshot", 00:04:56.896 "bdev_lvol_create", 00:04:56.896 "bdev_lvol_delete_lvstore", 00:04:56.896 "bdev_lvol_rename_lvstore", 
00:04:56.896 "bdev_lvol_create_lvstore", 00:04:56.896 "bdev_raid_set_options", 00:04:56.896 "bdev_raid_remove_base_bdev", 00:04:56.896 "bdev_raid_add_base_bdev", 00:04:56.896 "bdev_raid_delete", 00:04:56.896 "bdev_raid_create", 00:04:56.896 "bdev_raid_get_bdevs", 00:04:56.896 "bdev_error_inject_error", 00:04:56.896 "bdev_error_delete", 00:04:56.896 "bdev_error_create", 00:04:56.896 "bdev_split_delete", 00:04:56.896 "bdev_split_create", 00:04:56.896 "bdev_delay_delete", 00:04:56.896 "bdev_delay_create", 00:04:56.896 "bdev_delay_update_latency", 00:04:56.896 "bdev_zone_block_delete", 00:04:56.896 "bdev_zone_block_create", 00:04:56.896 "blobfs_create", 00:04:56.896 "blobfs_detect", 00:04:56.896 "blobfs_set_cache_size", 00:04:56.896 "bdev_aio_delete", 00:04:56.896 "bdev_aio_rescan", 00:04:56.896 "bdev_aio_create", 00:04:56.896 "bdev_ftl_set_property", 00:04:56.896 "bdev_ftl_get_properties", 00:04:56.896 "bdev_ftl_get_stats", 00:04:56.896 "bdev_ftl_unmap", 00:04:56.896 "bdev_ftl_unload", 00:04:56.896 "bdev_ftl_delete", 00:04:56.896 "bdev_ftl_load", 00:04:56.896 "bdev_ftl_create", 00:04:56.896 "bdev_virtio_attach_controller", 00:04:56.896 "bdev_virtio_scsi_get_devices", 00:04:56.896 "bdev_virtio_detach_controller", 00:04:56.896 "bdev_virtio_blk_set_hotplug", 00:04:56.896 "bdev_iscsi_delete", 00:04:56.896 "bdev_iscsi_create", 00:04:56.896 "bdev_iscsi_set_options", 00:04:56.896 "accel_error_inject_error", 00:04:56.896 "ioat_scan_accel_module", 00:04:56.896 "dsa_scan_accel_module", 00:04:56.896 "iaa_scan_accel_module", 00:04:56.896 "keyring_file_remove_key", 00:04:56.896 "keyring_file_add_key", 00:04:56.896 "keyring_linux_set_options", 00:04:56.896 "fsdev_aio_delete", 00:04:56.896 "fsdev_aio_create", 00:04:56.896 "iscsi_get_histogram", 00:04:56.896 "iscsi_enable_histogram", 00:04:56.896 "iscsi_set_options", 00:04:56.896 "iscsi_get_auth_groups", 00:04:56.896 "iscsi_auth_group_remove_secret", 00:04:56.896 "iscsi_auth_group_add_secret", 00:04:56.896 "iscsi_delete_auth_group", 00:04:56.896 "iscsi_create_auth_group", 00:04:56.896 "iscsi_set_discovery_auth", 00:04:56.896 "iscsi_get_options", 00:04:56.896 "iscsi_target_node_request_logout", 00:04:56.896 "iscsi_target_node_set_redirect", 00:04:56.897 "iscsi_target_node_set_auth", 00:04:56.897 "iscsi_target_node_add_lun", 00:04:56.897 "iscsi_get_stats", 00:04:56.897 "iscsi_get_connections", 00:04:56.897 "iscsi_portal_group_set_auth", 00:04:56.897 "iscsi_start_portal_group", 00:04:56.897 "iscsi_delete_portal_group", 00:04:56.897 "iscsi_create_portal_group", 00:04:56.897 "iscsi_get_portal_groups", 00:04:56.897 "iscsi_delete_target_node", 00:04:56.897 "iscsi_target_node_remove_pg_ig_maps", 00:04:56.897 "iscsi_target_node_add_pg_ig_maps", 00:04:56.897 "iscsi_create_target_node", 00:04:56.897 "iscsi_get_target_nodes", 00:04:56.897 "iscsi_delete_initiator_group", 00:04:56.897 "iscsi_initiator_group_remove_initiators", 00:04:56.897 "iscsi_initiator_group_add_initiators", 00:04:56.897 "iscsi_create_initiator_group", 00:04:56.897 "iscsi_get_initiator_groups", 00:04:56.897 "nvmf_set_crdt", 00:04:56.897 "nvmf_set_config", 00:04:56.897 "nvmf_set_max_subsystems", 00:04:56.897 "nvmf_stop_mdns_prr", 00:04:56.897 "nvmf_publish_mdns_prr", 00:04:56.897 "nvmf_subsystem_get_listeners", 00:04:56.897 "nvmf_subsystem_get_qpairs", 00:04:56.897 "nvmf_subsystem_get_controllers", 00:04:56.897 "nvmf_get_stats", 00:04:56.897 "nvmf_get_transports", 00:04:56.897 "nvmf_create_transport", 00:04:56.897 "nvmf_get_targets", 00:04:56.897 "nvmf_delete_target", 00:04:56.897 "nvmf_create_target", 
00:04:56.897 "nvmf_subsystem_allow_any_host", 00:04:56.897 "nvmf_subsystem_set_keys", 00:04:56.897 "nvmf_subsystem_remove_host", 00:04:56.897 "nvmf_subsystem_add_host", 00:04:56.897 "nvmf_ns_remove_host", 00:04:56.897 "nvmf_ns_add_host", 00:04:56.897 "nvmf_subsystem_remove_ns", 00:04:56.897 "nvmf_subsystem_set_ns_ana_group", 00:04:56.897 "nvmf_subsystem_add_ns", 00:04:56.897 "nvmf_subsystem_listener_set_ana_state", 00:04:56.897 "nvmf_discovery_get_referrals", 00:04:56.897 "nvmf_discovery_remove_referral", 00:04:56.897 "nvmf_discovery_add_referral", 00:04:56.897 "nvmf_subsystem_remove_listener", 00:04:56.897 "nvmf_subsystem_add_listener", 00:04:56.897 "nvmf_delete_subsystem", 00:04:56.897 "nvmf_create_subsystem", 00:04:56.897 "nvmf_get_subsystems", 00:04:56.897 "env_dpdk_get_mem_stats", 00:04:56.897 "nbd_get_disks", 00:04:56.897 "nbd_stop_disk", 00:04:56.897 "nbd_start_disk", 00:04:56.897 "ublk_recover_disk", 00:04:56.897 "ublk_get_disks", 00:04:56.897 "ublk_stop_disk", 00:04:56.897 "ublk_start_disk", 00:04:56.897 "ublk_destroy_target", 00:04:56.897 "ublk_create_target", 00:04:56.897 "virtio_blk_create_transport", 00:04:56.897 "virtio_blk_get_transports", 00:04:56.897 "vhost_controller_set_coalescing", 00:04:56.897 "vhost_get_controllers", 00:04:56.897 "vhost_delete_controller", 00:04:56.897 "vhost_create_blk_controller", 00:04:56.897 "vhost_scsi_controller_remove_target", 00:04:56.897 "vhost_scsi_controller_add_target", 00:04:56.897 "vhost_start_scsi_controller", 00:04:56.897 "vhost_create_scsi_controller", 00:04:56.897 "thread_set_cpumask", 00:04:56.897 "scheduler_set_options", 00:04:56.897 "framework_get_governor", 00:04:56.897 "framework_get_scheduler", 00:04:56.897 "framework_set_scheduler", 00:04:56.897 "framework_get_reactors", 00:04:56.897 "thread_get_io_channels", 00:04:56.897 "thread_get_pollers", 00:04:56.897 "thread_get_stats", 00:04:56.897 "framework_monitor_context_switch", 00:04:56.897 "spdk_kill_instance", 00:04:56.897 "log_enable_timestamps", 00:04:56.897 "log_get_flags", 00:04:56.897 "log_clear_flag", 00:04:56.897 "log_set_flag", 00:04:56.897 "log_get_level", 00:04:56.897 "log_set_level", 00:04:56.897 "log_get_print_level", 00:04:56.897 "log_set_print_level", 00:04:56.897 "framework_enable_cpumask_locks", 00:04:56.897 "framework_disable_cpumask_locks", 00:04:56.897 "framework_wait_init", 00:04:56.897 "framework_start_init", 00:04:56.897 "scsi_get_devices", 00:04:56.897 "bdev_get_histogram", 00:04:56.897 "bdev_enable_histogram", 00:04:56.897 "bdev_set_qos_limit", 00:04:56.897 "bdev_set_qd_sampling_period", 00:04:56.897 "bdev_get_bdevs", 00:04:56.897 "bdev_reset_iostat", 00:04:56.897 "bdev_get_iostat", 00:04:56.897 "bdev_examine", 00:04:56.897 "bdev_wait_for_examine", 00:04:56.897 "bdev_set_options", 00:04:56.897 "accel_get_stats", 00:04:56.897 "accel_set_options", 00:04:56.897 "accel_set_driver", 00:04:56.897 "accel_crypto_key_destroy", 00:04:56.897 "accel_crypto_keys_get", 00:04:56.897 "accel_crypto_key_create", 00:04:56.897 "accel_assign_opc", 00:04:56.897 "accel_get_module_info", 00:04:56.897 "accel_get_opc_assignments", 00:04:56.897 "vmd_rescan", 00:04:56.897 "vmd_remove_device", 00:04:56.897 "vmd_enable", 00:04:56.897 "sock_get_default_impl", 00:04:56.897 "sock_set_default_impl", 00:04:56.897 "sock_impl_set_options", 00:04:56.897 "sock_impl_get_options", 00:04:56.897 "iobuf_get_stats", 00:04:56.897 "iobuf_set_options", 00:04:56.897 "keyring_get_keys", 00:04:56.897 "framework_get_pci_devices", 00:04:56.897 "framework_get_config", 00:04:56.897 "framework_get_subsystems", 
00:04:56.897 "fsdev_set_opts", 00:04:56.897 "fsdev_get_opts", 00:04:56.897 "trace_get_info", 00:04:56.897 "trace_get_tpoint_group_mask", 00:04:56.897 "trace_disable_tpoint_group", 00:04:56.897 "trace_enable_tpoint_group", 00:04:56.897 "trace_clear_tpoint_mask", 00:04:56.897 "trace_set_tpoint_mask", 00:04:56.897 "notify_get_notifications", 00:04:56.897 "notify_get_types", 00:04:56.897 "spdk_get_version", 00:04:56.897 "rpc_get_methods" 00:04:56.897 ] 00:04:56.897 18:07:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.897 18:07:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:56.897 18:07:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2331674 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2331674 ']' 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2331674 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2331674 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2331674' 00:04:56.897 killing process with pid 2331674 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2331674 00:04:56.897 18:07:08 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2331674 00:04:59.429 00:04:59.429 real 0m4.194s 00:04:59.429 user 0m7.739s 00:04:59.429 sys 0m0.655s 00:04:59.429 18:07:11 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.429 18:07:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.429 ************************************ 00:04:59.429 END TEST spdkcli_tcp 00:04:59.429 ************************************ 00:04:59.429 18:07:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.429 18:07:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.429 18:07:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.429 18:07:11 -- common/autotest_common.sh@10 -- # set +x 00:04:59.429 ************************************ 00:04:59.429 START TEST dpdk_mem_utility 00:04:59.429 ************************************ 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.429 * Looking for test storage... 
00:04:59.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.429 18:07:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.429 --rc genhtml_branch_coverage=1 00:04:59.429 --rc genhtml_function_coverage=1 00:04:59.429 --rc genhtml_legend=1 00:04:59.429 --rc geninfo_all_blocks=1 00:04:59.429 --rc geninfo_unexecuted_blocks=1 00:04:59.429 00:04:59.429 ' 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.429 --rc 
genhtml_branch_coverage=1 00:04:59.429 --rc genhtml_function_coverage=1 00:04:59.429 --rc genhtml_legend=1 00:04:59.429 --rc geninfo_all_blocks=1 00:04:59.429 --rc geninfo_unexecuted_blocks=1 00:04:59.429 00:04:59.429 ' 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.429 --rc genhtml_branch_coverage=1 00:04:59.429 --rc genhtml_function_coverage=1 00:04:59.429 --rc genhtml_legend=1 00:04:59.429 --rc geninfo_all_blocks=1 00:04:59.429 --rc geninfo_unexecuted_blocks=1 00:04:59.429 00:04:59.429 ' 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.429 --rc genhtml_branch_coverage=1 00:04:59.429 --rc genhtml_function_coverage=1 00:04:59.429 --rc genhtml_legend=1 00:04:59.429 --rc geninfo_all_blocks=1 00:04:59.429 --rc geninfo_unexecuted_blocks=1 00:04:59.429 00:04:59.429 ' 00:04:59.429 18:07:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:59.429 18:07:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2332275 00:04:59.429 18:07:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.429 18:07:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2332275 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2332275 ']' 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.429 18:07:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.687 [2024-11-28 18:07:11.616586] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
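The test_dpdk_mem_info.sh run traced below reduces to three steps: launch spdk_tgt, ask it over RPC to dump its DPDK memory statistics, then feed the dump to scripts/dpdk_mem_info.py. A minimal manual sketch of the same flow, with paths relative to the spdk checkout and rpc.py standing in for the harness's rpc_cmd wrapper (that substitution is an assumption about the standalone equivalent):

  ./build/bin/spdk_tgt &
  ./scripts/rpc.py env_dpdk_get_mem_stats      # writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                   # heap / mempool / memzone summary
  ./scripts/dpdk_mem_info.py -m 0              # per-element detail for heap 0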
00:04:59.687 [2024-11-28 18:07:11.616733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332275 ] 00:04:59.687 [2024-11-28 18:07:11.759350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.944 [2024-11-28 18:07:11.896708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.880 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.880 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:00.880 18:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:00.880 18:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:00.880 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.880 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.880 { 00:05:00.880 "filename": "/tmp/spdk_mem_dump.txt" 00:05:00.880 } 00:05:00.880 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.880 18:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:00.880 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:00.880 1 heaps totaling size 824.000000 MiB 00:05:00.880 size: 824.000000 MiB heap id: 0 00:05:00.880 end heaps---------- 00:05:00.880 9 mempools totaling size 603.782043 MiB 00:05:00.880 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:00.880 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:00.880 size: 100.555481 MiB name: bdev_io_2332275 00:05:00.880 size: 50.003479 MiB name: msgpool_2332275 00:05:00.880 size: 36.509338 MiB name: fsdev_io_2332275 00:05:00.880 size: 21.763794 MiB name: PDU_Pool 00:05:00.880 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:00.880 size: 4.133484 MiB name: evtpool_2332275 00:05:00.880 size: 0.026123 MiB name: Session_Pool 00:05:00.880 end mempools------- 00:05:00.880 6 memzones totaling size 4.142822 MiB 00:05:00.880 size: 1.000366 MiB name: RG_ring_0_2332275 00:05:00.880 size: 1.000366 MiB name: RG_ring_1_2332275 00:05:00.880 size: 1.000366 MiB name: RG_ring_4_2332275 00:05:00.880 size: 1.000366 MiB name: RG_ring_5_2332275 00:05:00.880 size: 0.125366 MiB name: RG_ring_2_2332275 00:05:00.880 size: 0.015991 MiB name: RG_ring_3_2332275 00:05:00.880 end memzones------- 00:05:00.880 18:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:00.880 heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:00.880 list of free elements. 
size: 16.847595 MiB 00:05:00.880 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:00.880 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:00.880 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:00.880 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:00.880 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:00.880 element at address: 0x200019a00000 with size: 0.999329 MiB 00:05:00.880 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:00.880 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:00.880 element at address: 0x200019200000 with size: 0.959900 MiB 00:05:00.880 element at address: 0x200019d00040 with size: 0.937256 MiB 00:05:00.880 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:00.880 element at address: 0x20001b400000 with size: 0.583191 MiB 00:05:00.880 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:00.880 element at address: 0x200019600000 with size: 0.491150 MiB 00:05:00.880 element at address: 0x200019e00000 with size: 0.485657 MiB 00:05:00.880 element at address: 0x200012c00000 with size: 0.436157 MiB 00:05:00.880 element at address: 0x200028800000 with size: 0.411072 MiB 00:05:00.880 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:00.880 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:00.880 list of standard malloc elements. size: 199.221497 MiB 00:05:00.880 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:00.880 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:00.880 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:00.880 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:00.880 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:00.880 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:00.880 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:00.880 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:00.880 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:00.880 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:00.880 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:00.880 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:00.880 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:00.880 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:00.880 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:00.880 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:00.880 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:00.880 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:00.881 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:00.881 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:00.881 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:00.881 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:00.881 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:00.881 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:00.881 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:00.881 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:00.881 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:05:00.881 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:00.881 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:00.881 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:00.881 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:00.881 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:00.881 list of memzone associated elements. size: 607.930908 MiB 00:05:00.881 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:00.881 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:00.881 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:00.881 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:00.881 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:00.881 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2332275_0 00:05:00.881 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:00.881 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2332275_0 00:05:00.881 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:00.881 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2332275_0 00:05:00.881 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:00.881 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:00.881 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:00.881 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:00.881 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:00.881 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2332275_0 00:05:00.881 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:00.881 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2332275 00:05:00.881 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:00.881 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2332275 00:05:00.881 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:00.881 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:00.881 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:00.881 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:00.881 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:00.881 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:00.881 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:00.881 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:00.881 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:00.881 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2332275 00:05:00.881 element at address: 0x2000008ffb80 
with size: 1.000549 MiB 00:05:00.881 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2332275 00:05:00.881 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:00.881 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2332275 00:05:00.881 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:00.881 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2332275 00:05:00.881 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:00.881 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2332275 00:05:00.881 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:00.881 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2332275 00:05:00.881 element at address: 0x20001967dbc0 with size: 0.500549 MiB 00:05:00.881 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:00.881 element at address: 0x200012c6fa80 with size: 0.500549 MiB 00:05:00.881 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:00.881 element at address: 0x200019e7c540 with size: 0.250549 MiB 00:05:00.881 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:00.881 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:00.881 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2332275 00:05:00.881 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:00.881 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2332275 00:05:00.881 element at address: 0x2000192f5bc0 with size: 0.031799 MiB 00:05:00.881 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:00.881 element at address: 0x2000288693c0 with size: 0.023804 MiB 00:05:00.881 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:00.881 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:00.881 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2332275 00:05:00.881 element at address: 0x20002886f540 with size: 0.002502 MiB 00:05:00.881 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:00.881 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:00.881 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2332275 00:05:00.881 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:00.881 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2332275 00:05:00.881 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:00.881 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2332275 00:05:00.881 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:00.881 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:00.881 18:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:00.881 18:07:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2332275 00:05:00.881 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2332275 ']' 00:05:00.881 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2332275 00:05:00.881 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:00.881 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.881 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2332275 00:05:01.139 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:05:01.139 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.139 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2332275' 00:05:01.139 killing process with pid 2332275 00:05:01.139 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2332275 00:05:01.139 18:07:12 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2332275 00:05:03.668 00:05:03.668 real 0m4.029s 00:05:03.668 user 0m4.045s 00:05:03.668 sys 0m0.636s 00:05:03.668 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.668 18:07:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.668 ************************************ 00:05:03.668 END TEST dpdk_mem_utility 00:05:03.668 ************************************ 00:05:03.668 18:07:15 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:03.668 18:07:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.668 18:07:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.668 18:07:15 -- common/autotest_common.sh@10 -- # set +x 00:05:03.668 ************************************ 00:05:03.668 START TEST event 00:05:03.668 ************************************ 00:05:03.668 18:07:15 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:03.668 * Looking for test storage... 00:05:03.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:03.668 18:07:15 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.668 18:07:15 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.668 18:07:15 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.668 18:07:15 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.668 18:07:15 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.668 18:07:15 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.668 18:07:15 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.668 18:07:15 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.668 18:07:15 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.668 18:07:15 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.668 18:07:15 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.668 18:07:15 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.668 18:07:15 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.668 18:07:15 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.668 18:07:15 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.668 18:07:15 event -- scripts/common.sh@344 -- # case "$op" in 00:05:03.668 18:07:15 event -- scripts/common.sh@345 -- # : 1 00:05:03.668 18:07:15 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.668 18:07:15 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.668 18:07:15 event -- scripts/common.sh@365 -- # decimal 1 00:05:03.668 18:07:15 event -- scripts/common.sh@353 -- # local d=1 00:05:03.668 18:07:15 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.668 18:07:15 event -- scripts/common.sh@355 -- # echo 1 00:05:03.668 18:07:15 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.668 18:07:15 event -- scripts/common.sh@366 -- # decimal 2 00:05:03.668 18:07:15 event -- scripts/common.sh@353 -- # local d=2 00:05:03.668 18:07:15 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.668 18:07:15 event -- scripts/common.sh@355 -- # echo 2 00:05:03.668 18:07:15 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.668 18:07:15 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.668 18:07:15 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.668 18:07:15 event -- scripts/common.sh@368 -- # return 0 00:05:03.668 18:07:15 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.668 18:07:15 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.669 --rc genhtml_branch_coverage=1 00:05:03.669 --rc genhtml_function_coverage=1 00:05:03.669 --rc genhtml_legend=1 00:05:03.669 --rc geninfo_all_blocks=1 00:05:03.669 --rc geninfo_unexecuted_blocks=1 00:05:03.669 00:05:03.669 ' 00:05:03.669 18:07:15 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.669 --rc genhtml_branch_coverage=1 00:05:03.669 --rc genhtml_function_coverage=1 00:05:03.669 --rc genhtml_legend=1 00:05:03.669 --rc geninfo_all_blocks=1 00:05:03.669 --rc geninfo_unexecuted_blocks=1 00:05:03.669 00:05:03.669 ' 00:05:03.669 18:07:15 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.669 --rc genhtml_branch_coverage=1 00:05:03.669 --rc genhtml_function_coverage=1 00:05:03.669 --rc genhtml_legend=1 00:05:03.669 --rc geninfo_all_blocks=1 00:05:03.669 --rc geninfo_unexecuted_blocks=1 00:05:03.669 00:05:03.669 ' 00:05:03.669 18:07:15 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.669 --rc genhtml_branch_coverage=1 00:05:03.669 --rc genhtml_function_coverage=1 00:05:03.669 --rc genhtml_legend=1 00:05:03.669 --rc geninfo_all_blocks=1 00:05:03.669 --rc geninfo_unexecuted_blocks=1 00:05:03.669 00:05:03.669 ' 00:05:03.669 18:07:15 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:03.669 18:07:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:03.669 18:07:15 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.669 18:07:15 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:03.669 18:07:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.669 18:07:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.669 ************************************ 00:05:03.669 START TEST event_perf 00:05:03.669 ************************************ 00:05:03.669 18:07:15 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:03.669 Running I/O for 1 seconds...[2024-11-28 18:07:15.668216] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:03.669 [2024-11-28 18:07:15.668341] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332758 ] 00:05:03.927 [2024-11-28 18:07:15.809496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.927 [2024-11-28 18:07:15.960028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.927 [2024-11-28 18:07:15.960106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.927 [2024-11-28 18:07:15.960179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.927 [2024-11-28 18:07:15.960201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.301 Running I/O for 1 seconds... 00:05:05.301 lcore 0: 222630 00:05:05.301 lcore 1: 222630 00:05:05.301 lcore 2: 222631 00:05:05.301 lcore 3: 222631 00:05:05.301 done. 00:05:05.301 00:05:05.301 real 0m1.599s 00:05:05.301 user 0m4.431s 00:05:05.301 sys 0m0.153s 00:05:05.301 18:07:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.301 18:07:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.301 ************************************ 00:05:05.301 END TEST event_perf 00:05:05.301 ************************************ 00:05:05.301 18:07:17 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:05.301 18:07:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:05.301 18:07:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.301 18:07:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.301 ************************************ 00:05:05.301 START TEST event_reactor 00:05:05.301 ************************************ 00:05:05.301 18:07:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:05.301 [2024-11-28 18:07:17.314247] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
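In the event_perf run above, each of the four reactors reports a per-core event count of roughly 222k over the 1-second measurement window. The two knobs on the binary are the reactor core mask (-m) and the run time in seconds (-t); a rerun on two cores for five seconds might look like the following (mask and duration here are purely illustrative, not values used by the harness):

  ./test/event/event_perf/event_perf -m 0x3 -t 5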
00:05:05.301 [2024-11-28 18:07:17.314371] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2333040 ] 00:05:05.560 [2024-11-28 18:07:17.454170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.560 [2024-11-28 18:07:17.592208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.933 test_start 00:05:06.933 oneshot 00:05:06.933 tick 100 00:05:06.933 tick 100 00:05:06.933 tick 250 00:05:06.933 tick 100 00:05:06.933 tick 100 00:05:06.933 tick 100 00:05:06.933 tick 250 00:05:06.933 tick 500 00:05:06.933 tick 100 00:05:06.933 tick 100 00:05:06.933 tick 250 00:05:06.933 tick 100 00:05:06.933 tick 100 00:05:06.933 test_end 00:05:06.933 00:05:06.933 real 0m1.569s 00:05:06.933 user 0m1.421s 00:05:06.933 sys 0m0.140s 00:05:06.933 18:07:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.933 18:07:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:06.933 ************************************ 00:05:06.933 END TEST event_reactor 00:05:06.933 ************************************ 00:05:06.933 18:07:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.933 18:07:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:06.933 18:07:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.933 18:07:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.933 ************************************ 00:05:06.933 START TEST event_reactor_perf 00:05:06.933 ************************************ 00:05:06.933 18:07:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.933 [2024-11-28 18:07:18.927635] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:06.933 [2024-11-28 18:07:18.927745] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2333197 ] 00:05:07.192 [2024-11-28 18:07:19.069692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.192 [2024-11-28 18:07:19.207575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.566 test_start 00:05:08.566 test_end 00:05:08.566 Performance: 266444 events per second 00:05:08.566 00:05:08.566 real 0m1.568s 00:05:08.566 user 0m1.427s 00:05:08.566 sys 0m0.131s 00:05:08.566 18:07:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.566 18:07:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.566 ************************************ 00:05:08.566 END TEST event_reactor_perf 00:05:08.566 ************************************ 00:05:08.566 18:07:20 event -- event/event.sh@49 -- # uname -s 00:05:08.566 18:07:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:08.566 18:07:20 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.566 18:07:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.566 18:07:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.566 18:07:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.566 ************************************ 00:05:08.566 START TEST event_scheduler 00:05:08.566 ************************************ 00:05:08.566 18:07:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.566 * Looking for test storage... 
00:05:08.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:08.566 18:07:20 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.566 18:07:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.566 18:07:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.566 18:07:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.566 18:07:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:08.566 18:07:20 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.566 18:07:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.566 --rc genhtml_branch_coverage=1 00:05:08.566 --rc genhtml_function_coverage=1 00:05:08.566 --rc genhtml_legend=1 00:05:08.566 --rc geninfo_all_blocks=1 00:05:08.566 --rc geninfo_unexecuted_blocks=1 00:05:08.566 00:05:08.566 ' 00:05:08.566 18:07:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.566 --rc genhtml_branch_coverage=1 00:05:08.566 --rc genhtml_function_coverage=1 00:05:08.566 --rc genhtml_legend=1 00:05:08.566 --rc geninfo_all_blocks=1 00:05:08.566 --rc geninfo_unexecuted_blocks=1 00:05:08.566 00:05:08.566 ' 00:05:08.566 18:07:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.566 --rc genhtml_branch_coverage=1 00:05:08.566 --rc genhtml_function_coverage=1 00:05:08.566 --rc genhtml_legend=1 00:05:08.566 --rc geninfo_all_blocks=1 00:05:08.566 --rc geninfo_unexecuted_blocks=1 00:05:08.566 00:05:08.566 ' 00:05:08.566 18:07:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.566 --rc genhtml_branch_coverage=1 00:05:08.566 --rc genhtml_function_coverage=1 00:05:08.566 --rc genhtml_legend=1 00:05:08.566 --rc geninfo_all_blocks=1 00:05:08.566 --rc geninfo_unexecuted_blocks=1 00:05:08.566 00:05:08.566 ' 00:05:08.566 18:07:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.566 18:07:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2333513 00:05:08.566 18:07:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:08.566 18:07:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.566 18:07:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2333513 00:05:08.567 18:07:20 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2333513 ']' 00:05:08.567 18:07:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.567 18:07:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.567 18:07:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.567 18:07:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.567 18:07:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.825 [2024-11-28 18:07:20.723517] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:08.825 [2024-11-28 18:07:20.723668] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2333513 ] 00:05:08.825 [2024-11-28 18:07:20.856293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:09.083 [2024-11-28 18:07:20.982643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.083 [2024-11-28 18:07:20.982705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.083 [2024-11-28 18:07:20.982753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.083 [2024-11-28 18:07:20.982763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.649 18:07:21 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.649 18:07:21 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:09.649 18:07:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:09.649 18:07:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.649 18:07:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.649 [2024-11-28 18:07:21.701977] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:09.649 [2024-11-28 18:07:21.702024] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:09.649 [2024-11-28 18:07:21.702075] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:09.649 [2024-11-28 18:07:21.702120] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:09.649 [2024-11-28 18:07:21.702156] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:09.649 18:07:21 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.649 18:07:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:09.649 18:07:21 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.649 18:07:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 [2024-11-28 18:07:22.018974] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:10.248 18:07:22 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:10.248 18:07:22 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.248 18:07:22 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 ************************************ 00:05:10.248 START TEST scheduler_create_thread 00:05:10.248 ************************************ 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 2 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 3 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 4 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 5 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 6 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 7 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 8 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 9 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 10 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.248 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.841 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.841 00:05:10.841 real 0m0.599s 00:05:10.841 user 0m0.007s 00:05:10.841 sys 0m0.007s 00:05:10.841 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.841 18:07:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.841 ************************************ 00:05:10.841 END TEST scheduler_create_thread 00:05:10.841 ************************************ 00:05:10.841 18:07:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:10.841 18:07:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2333513 00:05:10.841 18:07:22 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2333513 ']' 00:05:10.841 18:07:22 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2333513 00:05:10.841 18:07:22 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:10.841 18:07:22 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.841 18:07:22 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2333513 00:05:10.841 18:07:22 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:10.841 18:07:22 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:10.841 18:07:22 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2333513' 00:05:10.841 killing process with pid 2333513 00:05:10.841 18:07:22 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2333513 00:05:10.841 18:07:22 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2333513 00:05:11.099 [2024-11-28 18:07:23.127636] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
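The scheduler_create_thread trace above drives the test app's plugin RPCs: threads are created with a name (-n), an optional pinned core mask (-m) and an activity value (-a), one thread's activity is then changed with scheduler_thread_set_active, and a throwaway thread is deleted with scheduler_thread_delete. A condensed sketch of the same calls, assuming the scheduler_plugin module is importable by rpc.py as the harness arranges, and using the thread IDs (11, 12) that the create calls returned here:

  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12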
00:05:12.475 00:05:12.475 real 0m3.658s 00:05:12.475 user 0m7.557s 00:05:12.475 sys 0m0.486s 00:05:12.475 18:07:24 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.475 18:07:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:12.475 ************************************ 00:05:12.475 END TEST event_scheduler 00:05:12.475 ************************************ 00:05:12.475 18:07:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:12.475 18:07:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:12.475 18:07:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.475 18:07:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.475 18:07:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.475 ************************************ 00:05:12.475 START TEST app_repeat 00:05:12.475 ************************************ 00:05:12.475 18:07:24 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2333968 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2333968' 00:05:12.475 Process app_repeat pid: 2333968 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:12.475 spdk_app_start Round 0 00:05:12.475 18:07:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2333968 /var/tmp/spdk-nbd.sock 00:05:12.475 18:07:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2333968 ']' 00:05:12.475 18:07:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.475 18:07:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.475 18:07:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.475 18:07:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.475 18:07:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.475 [2024-11-28 18:07:24.275554] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
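app_repeat is pointed at its own RPC socket (/var/tmp/spdk-nbd.sock), and the run that follows builds its block devices on the fly: two 64 MiB malloc bdevs with 4096-byte blocks, exported through the kernel nbd driver as /dev/nbd0 and /dev/nbd1, which the dd-based read/write checks then exercise. A bare-bones sketch of that setup step, using the same calls the trace shows:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1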
00:05:12.475 [2024-11-28 18:07:24.275678] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2333968 ] 00:05:12.475 [2024-11-28 18:07:24.416637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.475 [2024-11-28 18:07:24.557885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.475 [2024-11-28 18:07:24.557889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.410 18:07:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.410 18:07:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:13.410 18:07:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.667 Malloc0 00:05:13.667 18:07:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.926 Malloc1 00:05:13.926 18:07:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.926 18:07:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.492 /dev/nbd0 00:05:14.492 18:07:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.492 18:07:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.492 1+0 records in 00:05:14.492 1+0 records out 00:05:14.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238094 s, 17.2 MB/s 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.492 18:07:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.492 18:07:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.492 18:07:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.492 18:07:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.751 /dev/nbd1 00:05:14.751 18:07:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.751 18:07:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.751 1+0 records in 00:05:14.751 1+0 records out 00:05:14.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229515 s, 17.8 MB/s 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.751 18:07:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.751 18:07:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.751 18:07:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.751 
18:07:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.751 18:07:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.751 18:07:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.009 { 00:05:15.009 "nbd_device": "/dev/nbd0", 00:05:15.009 "bdev_name": "Malloc0" 00:05:15.009 }, 00:05:15.009 { 00:05:15.009 "nbd_device": "/dev/nbd1", 00:05:15.009 "bdev_name": "Malloc1" 00:05:15.009 } 00:05:15.009 ]' 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.009 { 00:05:15.009 "nbd_device": "/dev/nbd0", 00:05:15.009 "bdev_name": "Malloc0" 00:05:15.009 }, 00:05:15.009 { 00:05:15.009 "nbd_device": "/dev/nbd1", 00:05:15.009 "bdev_name": "Malloc1" 00:05:15.009 } 00:05:15.009 ]' 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.009 /dev/nbd1' 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.009 /dev/nbd1' 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.009 256+0 records in 00:05:15.009 256+0 records out 00:05:15.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509388 s, 206 MB/s 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.009 18:07:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.009 256+0 records in 00:05:15.009 256+0 records out 00:05:15.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240219 s, 43.7 MB/s 00:05:15.009 18:07:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.010 256+0 records in 00:05:15.010 256+0 records out 00:05:15.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297023 s, 35.3 MB/s 00:05:15.010 18:07:27 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.010 18:07:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.268 18:07:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.268 18:07:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.268 18:07:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.268 18:07:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.268 18:07:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.268 18:07:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.268 18:07:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.268 18:07:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.268 18:07:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.268 18:07:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.834 18:07:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.091 18:07:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.091 18:07:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.091 18:07:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.091 18:07:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.091 18:07:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.091 18:07:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.091 18:07:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.091 18:07:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.091 18:07:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.091 18:07:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.349 18:07:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.725 [2024-11-28 18:07:29.640571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.725 [2024-11-28 18:07:29.775501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.725 [2024-11-28 18:07:29.775502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.983 [2024-11-28 18:07:29.990794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.983 [2024-11-28 18:07:29.990910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.353 18:07:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.353 18:07:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:19.353 spdk_app_start Round 1 00:05:19.353 18:07:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2333968 /var/tmp/spdk-nbd.sock 00:05:19.353 18:07:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2333968 ']' 00:05:19.353 18:07:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.353 18:07:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.353 18:07:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
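Note for readers following the trace: the app_repeat round above reduces to a short RPC-driven NBD round trip. The following is a minimal sketch of that flow, not the test script itself; it assumes rpc.py is reachable at the socket path used by the test, that the kernel nbd module is loaded, and that /dev/nbd0 and /dev/nbd1 are free.

    #!/usr/bin/env bash
    # Sketch of one app_repeat round: create malloc bdevs, export them over NBD,
    # write random data, verify it back, then tear the exports down again.
    set -euo pipefail

    RPC=./scripts/rpc.py            # assumed path to the SPDK RPC helper
    SOCK=/var/tmp/spdk-nbd.sock     # socket the app_repeat binary listens on

    $RPC -s "$SOCK" bdev_malloc_create 64 4096   # 64 MiB bdev, 4 KiB blocks -> Malloc0
    $RPC -s "$SOCK" bdev_malloc_create 64 4096   # -> Malloc1

    $RPC -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0
    $RPC -s "$SOCK" nbd_start_disk Malloc1 /dev/nbd1

    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256       # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$dev"                       # verify the data came back intact
    done
    rm -f "$tmp"

    $RPC -s "$SOCK" nbd_stop_disk /dev/nbd0
    $RPC -s "$SOCK" nbd_stop_disk /dev/nbd1

Every RPC name and dd/cmp invocation above mirrors the trace; only the variable names and the use of mktemp are illustrative.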
00:05:19.353 18:07:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.353 18:07:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.645 18:07:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.645 18:07:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.645 18:07:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.210 Malloc0 00:05:20.210 18:07:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.468 Malloc1 00:05:20.468 18:07:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.468 18:07:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.726 /dev/nbd0 00:05:20.726 18:07:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.726 18:07:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:20.726 1+0 records in 00:05:20.726 1+0 records out 00:05:20.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024321 s, 16.8 MB/s 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.726 18:07:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.726 18:07:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.726 18:07:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.726 18:07:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.984 /dev/nbd1 00:05:20.984 18:07:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.984 18:07:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.984 1+0 records in 00:05:20.984 1+0 records out 00:05:20.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229108 s, 17.9 MB/s 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.984 18:07:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.984 18:07:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.984 18:07:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.984 18:07:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.984 18:07:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.984 18:07:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.241 18:07:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:21.241 { 00:05:21.241 "nbd_device": "/dev/nbd0", 00:05:21.241 "bdev_name": "Malloc0" 00:05:21.241 }, 00:05:21.241 { 00:05:21.241 "nbd_device": "/dev/nbd1", 00:05:21.241 "bdev_name": "Malloc1" 00:05:21.241 } 00:05:21.241 ]' 00:05:21.241 18:07:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:21.241 { 00:05:21.241 "nbd_device": "/dev/nbd0", 00:05:21.241 "bdev_name": "Malloc0" 00:05:21.241 }, 00:05:21.241 { 00:05:21.241 "nbd_device": "/dev/nbd1", 00:05:21.241 "bdev_name": "Malloc1" 00:05:21.241 } 00:05:21.241 ]' 00:05:21.241 18:07:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.498 /dev/nbd1' 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.498 /dev/nbd1' 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.498 256+0 records in 00:05:21.498 256+0 records out 00:05:21.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502216 s, 209 MB/s 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.498 256+0 records in 00:05:21.498 256+0 records out 00:05:21.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251409 s, 41.7 MB/s 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.498 256+0 records in 00:05:21.498 256+0 records out 00:05:21.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0286176 s, 36.6 MB/s 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.498 18:07:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.756 18:07:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.756 18:07:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.756 18:07:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.756 18:07:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.756 18:07:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.756 18:07:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.756 18:07:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.756 18:07:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.756 18:07:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.756 18:07:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.013 18:07:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.013 18:07:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.013 18:07:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.013 18:07:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.013 18:07:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.013 18:07:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.013 18:07:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.013 18:07:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.013 18:07:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.013 18:07:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.013 18:07:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.270 18:07:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.270 18:07:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.270 18:07:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.270 18:07:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.271 18:07:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.271 18:07:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.271 18:07:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.271 18:07:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.271 18:07:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.271 18:07:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.271 18:07:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.271 18:07:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.271 18:07:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:22.837 18:07:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.212 [2024-11-28 18:07:36.016697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.212 [2024-11-28 18:07:36.150588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.212 [2024-11-28 18:07:36.150588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.473 [2024-11-28 18:07:36.362558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.473 [2024-11-28 18:07:36.362649] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.848 18:07:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.848 18:07:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:25.848 spdk_app_start Round 2 00:05:25.848 18:07:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2333968 /var/tmp/spdk-nbd.sock 00:05:25.848 18:07:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2333968 ']' 00:05:25.848 18:07:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.848 18:07:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.848 18:07:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
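The nbd_get_count step seen in the trace simply counts entries in the nbd_get_disks JSON. A hedged, stand-alone version of the same check (socket path taken from the trace, expected count of 2 is only an example):

    # Count currently exported NBD devices the way nbd_common.sh does:
    # list them over RPC, extract the device paths with jq, count the matches.
    count=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true)   # grep -c still prints 0, but exits non-zero, when nothing matches
    [ "$count" -eq 2 ] || echo "expected 2 exported devices, found $count" >&2

After the nbd_stop_disk calls the same pipeline yields 0, which is exactly the count=0 path visible above.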
00:05:25.848 18:07:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.848 18:07:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.106 18:07:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.106 18:07:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.106 18:07:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.364 Malloc0 00:05:26.364 18:07:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.931 Malloc1 00:05:26.931 18:07:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.931 18:07:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.189 /dev/nbd0 00:05:27.190 18:07:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.190 18:07:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:27.190 1+0 records in 00:05:27.190 1+0 records out 00:05:27.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282481 s, 14.5 MB/s 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.190 18:07:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.190 18:07:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.190 18:07:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.190 18:07:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.448 /dev/nbd1 00:05:27.448 18:07:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.448 18:07:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.448 1+0 records in 00:05:27.448 1+0 records out 00:05:27.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234543 s, 17.5 MB/s 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.448 18:07:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.448 18:07:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.448 18:07:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.448 18:07:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.448 18:07:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.448 18:07:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:27.708 { 00:05:27.708 "nbd_device": "/dev/nbd0", 00:05:27.708 "bdev_name": "Malloc0" 00:05:27.708 }, 00:05:27.708 { 00:05:27.708 "nbd_device": "/dev/nbd1", 00:05:27.708 "bdev_name": "Malloc1" 00:05:27.708 } 00:05:27.708 ]' 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.708 { 00:05:27.708 "nbd_device": "/dev/nbd0", 00:05:27.708 "bdev_name": "Malloc0" 00:05:27.708 }, 00:05:27.708 { 00:05:27.708 "nbd_device": "/dev/nbd1", 00:05:27.708 "bdev_name": "Malloc1" 00:05:27.708 } 00:05:27.708 ]' 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.708 /dev/nbd1' 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.708 /dev/nbd1' 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.708 256+0 records in 00:05:27.708 256+0 records out 00:05:27.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518325 s, 202 MB/s 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.708 256+0 records in 00:05:27.708 256+0 records out 00:05:27.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252352 s, 41.6 MB/s 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.708 256+0 records in 00:05:27.708 256+0 records out 00:05:27.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288974 s, 36.3 MB/s 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.708 18:07:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.274 18:07:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.275 18:07:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.275 18:07:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.275 18:07:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.275 18:07:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.275 18:07:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.275 18:07:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.275 18:07:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.275 18:07:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.275 18:07:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.533 18:07:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.533 18:07:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.533 18:07:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.533 18:07:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.533 18:07:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.533 18:07:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.533 18:07:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.533 18:07:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.533 18:07:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.533 18:07:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.533 18:07:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.791 18:07:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.791 18:07:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.358 18:07:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.295 [2024-11-28 18:07:42.387571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.553 [2024-11-28 18:07:42.522301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.553 [2024-11-28 18:07:42.522303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.811 [2024-11-28 18:07:42.738199] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.811 [2024-11-28 18:07:42.738284] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.212 18:07:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2333968 /var/tmp/spdk-nbd.sock 00:05:32.212 18:07:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2333968 ']' 00:05:32.212 18:07:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.212 18:07:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.212 18:07:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
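Between rounds the test tears the target down with spdk_kill_instance and then waits for the next instance's RPC socket before creating new bdevs. A rough sketch of that restart handshake; the retry count and the use of rpc_get_methods as a liveness probe are illustrative assumptions, the real waitforlisten helper in autotest_common.sh works differently:

    # Ask the running app to exit, then poll until a fresh instance is listening.
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3   # matches the 'sleep 3' between rounds in the trace

    for i in $(seq 1 100); do
        if ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; then
            break   # socket is back up, the next round can start
        fi
        sleep 0.5
    done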
00:05:32.212 18:07:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.212 18:07:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:32.470 18:07:44 event.app_repeat -- event/event.sh@39 -- # killprocess 2333968 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2333968 ']' 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2333968 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2333968 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2333968' 00:05:32.470 killing process with pid 2333968 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2333968 00:05:32.470 18:07:44 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2333968 00:05:33.845 spdk_app_start is called in Round 0. 00:05:33.845 Shutdown signal received, stop current app iteration 00:05:33.845 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:33.845 spdk_app_start is called in Round 1. 00:05:33.845 Shutdown signal received, stop current app iteration 00:05:33.845 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:33.845 spdk_app_start is called in Round 2. 00:05:33.845 Shutdown signal received, stop current app iteration 00:05:33.845 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:05:33.845 spdk_app_start is called in Round 3. 
00:05:33.845 Shutdown signal received, stop current app iteration 00:05:33.845 18:07:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:33.845 18:07:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:33.845 00:05:33.845 real 0m21.317s 00:05:33.845 user 0m45.418s 00:05:33.845 sys 0m3.414s 00:05:33.845 18:07:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.845 18:07:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.845 ************************************ 00:05:33.845 END TEST app_repeat 00:05:33.845 ************************************ 00:05:33.845 18:07:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:33.845 18:07:45 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:33.845 18:07:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.845 18:07:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.845 18:07:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.845 ************************************ 00:05:33.845 START TEST cpu_locks 00:05:33.845 ************************************ 00:05:33.845 18:07:45 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:33.845 * Looking for test storage... 00:05:33.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:33.845 18:07:45 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:33.845 18:07:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:33.845 18:07:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.845 18:07:45 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:33.845 18:07:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.846 18:07:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:33.846 18:07:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:33.846 18:07:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.846 18:07:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:33.846 18:07:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.846 18:07:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.846 18:07:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.846 18:07:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:33.846 18:07:45 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.846 18:07:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.846 --rc genhtml_branch_coverage=1 00:05:33.846 --rc genhtml_function_coverage=1 00:05:33.846 --rc genhtml_legend=1 00:05:33.846 --rc geninfo_all_blocks=1 00:05:33.846 --rc geninfo_unexecuted_blocks=1 00:05:33.846 00:05:33.846 ' 00:05:33.846 18:07:45 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.846 --rc genhtml_branch_coverage=1 00:05:33.846 --rc genhtml_function_coverage=1 00:05:33.846 --rc genhtml_legend=1 00:05:33.846 --rc geninfo_all_blocks=1 00:05:33.846 --rc geninfo_unexecuted_blocks=1 00:05:33.846 00:05:33.846 ' 00:05:33.846 18:07:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.846 --rc genhtml_branch_coverage=1 00:05:33.846 --rc genhtml_function_coverage=1 00:05:33.846 --rc genhtml_legend=1 00:05:33.846 --rc geninfo_all_blocks=1 00:05:33.846 --rc geninfo_unexecuted_blocks=1 00:05:33.846 00:05:33.846 ' 00:05:33.846 18:07:45 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.846 --rc genhtml_branch_coverage=1 00:05:33.846 --rc genhtml_function_coverage=1 00:05:33.846 --rc genhtml_legend=1 00:05:33.846 --rc geninfo_all_blocks=1 00:05:33.846 --rc geninfo_unexecuted_blocks=1 00:05:33.846 00:05:33.846 ' 00:05:33.846 18:07:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:33.846 18:07:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:33.846 18:07:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:33.846 18:07:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:33.846 18:07:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.846 18:07:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.846 18:07:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.846 ************************************ 
00:05:33.846 START TEST default_locks 00:05:33.846 ************************************ 00:05:33.846 18:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:33.846 18:07:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2336730 00:05:33.846 18:07:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.846 18:07:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2336730 00:05:33.846 18:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2336730 ']' 00:05:33.846 18:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.846 18:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.846 18:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.846 18:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.846 18:07:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.846 [2024-11-28 18:07:45.857481] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:33.846 [2024-11-28 18:07:45.857622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2336730 ] 00:05:34.104 [2024-11-28 18:07:46.000407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.104 [2024-11-28 18:07:46.137745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.039 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.039 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:35.039 18:07:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2336730 00:05:35.039 18:07:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2336730 00:05:35.039 18:07:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.605 lslocks: write error 00:05:35.605 18:07:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2336730 00:05:35.605 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2336730 ']' 00:05:35.605 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2336730 00:05:35.605 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:35.605 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.605 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2336730 00:05:35.605 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.605 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.605 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2336730' 00:05:35.605 killing process with pid 2336730 00:05:35.605 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2336730 00:05:35.605 18:07:47 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2336730 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2336730 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2336730 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2336730 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2336730 ']' 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2336730) - No such process 00:05:38.263 ERROR: process (pid: 2336730) is no longer running 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.263 00:05:38.263 real 0m4.163s 00:05:38.263 user 0m4.130s 00:05:38.263 sys 0m0.779s 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.263 18:07:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.263 ************************************ 00:05:38.263 END TEST default_locks 00:05:38.263 ************************************ 00:05:38.263 18:07:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:38.263 18:07:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.263 18:07:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.263 18:07:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.263 ************************************ 00:05:38.263 START TEST default_locks_via_rpc 00:05:38.263 ************************************ 00:05:38.263 18:07:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:38.263 18:07:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2337293 00:05:38.263 18:07:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.263 18:07:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2337293 00:05:38.263 18:07:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2337293 ']' 00:05:38.264 18:07:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.264 18:07:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.264 18:07:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
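The default_locks test that finishes above reduces to one check: while the target is alive, its core lock shows up in lslocks, and once the process is killed the lock disappears with it. A minimal sketch of that check, lifted from the cpu_locks.sh helper seen in the trace (the pid is illustrative; on a real run take it from spdk_tgt's own output):

  pid=2336730                                   # illustrative pid of a running spdk_tgt -m 0x1
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "core lock held by $pid"
  else
      echo "no core lock for $pid"
  fi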
00:05:38.264 18:07:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.264 18:07:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.264 [2024-11-28 18:07:50.078781] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:38.264 [2024-11-28 18:07:50.078935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337293 ] 00:05:38.264 [2024-11-28 18:07:50.217928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.264 [2024-11-28 18:07:50.350468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.196 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.455 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.455 18:07:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2337293 00:05:39.455 18:07:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2337293 00:05:39.455 18:07:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.713 18:07:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2337293 00:05:39.713 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2337293 ']' 00:05:39.713 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2337293 00:05:39.713 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:39.713 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.713 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2337293 00:05:39.713 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.713 
18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.713 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2337293' 00:05:39.713 killing process with pid 2337293 00:05:39.713 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2337293 00:05:39.713 18:07:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2337293 00:05:42.243 00:05:42.243 real 0m4.094s 00:05:42.243 user 0m4.125s 00:05:42.243 sys 0m0.733s 00:05:42.243 18:07:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.244 18:07:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.244 ************************************ 00:05:42.244 END TEST default_locks_via_rpc 00:05:42.244 ************************************ 00:05:42.244 18:07:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:42.244 18:07:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.244 18:07:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.244 18:07:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.244 ************************************ 00:05:42.244 START TEST non_locking_app_on_locked_coremask 00:05:42.244 ************************************ 00:05:42.244 18:07:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:42.244 18:07:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2337726 00:05:42.244 18:07:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.244 18:07:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2337726 /var/tmp/spdk.sock 00:05:42.244 18:07:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2337726 ']' 00:05:42.244 18:07:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.244 18:07:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.244 18:07:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.244 18:07:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.244 18:07:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.244 [2024-11-28 18:07:54.214645] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:42.244 [2024-11-28 18:07:54.214795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337726 ] 00:05:42.501 [2024-11-28 18:07:54.353960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.501 [2024-11-28 18:07:54.486018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.462 18:07:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.462 18:07:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.462 18:07:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2337983 00:05:43.462 18:07:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:43.462 18:07:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2337983 /var/tmp/spdk2.sock 00:05:43.462 18:07:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2337983 ']' 00:05:43.462 18:07:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.462 18:07:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.462 18:07:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.462 18:07:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.462 18:07:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.462 [2024-11-28 18:07:55.552360] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:43.462 [2024-11-28 18:07:55.552533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337983 ] 00:05:43.720 [2024-11-28 18:07:55.762408] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
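The second target only comes up here because it was told not to take the CPU locks: the first instance holds core 0, and the second is pointed at its own RPC socket with locking disabled. A condensed sketch of the two launches driven by cpu_locks.sh above (binary path shortened, pids will differ, run order matters):

  ./build/bin/spdk_tgt -m 0x1 &                                                  # claims core 0's lock
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0 without claiming it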
00:05:43.720 [2024-11-28 18:07:55.762487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.979 [2024-11-28 18:07:56.042417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.508 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.508 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:46.508 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2337726 00:05:46.508 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2337726 00:05:46.508 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.508 lslocks: write error 00:05:46.508 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2337726 00:05:46.508 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2337726 ']' 00:05:46.508 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2337726 00:05:46.508 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:46.508 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.508 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2337726 00:05:46.766 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.766 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.766 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2337726' 00:05:46.766 killing process with pid 2337726 00:05:46.766 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2337726 00:05:46.766 18:07:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2337726 00:05:52.028 18:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2337983 00:05:52.028 18:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2337983 ']' 00:05:52.028 18:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2337983 00:05:52.028 18:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:52.028 18:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.028 18:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2337983 00:05:52.028 18:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.028 18:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.028 18:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2337983' 00:05:52.029 
killing process with pid 2337983 00:05:52.029 18:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2337983 00:05:52.029 18:08:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2337983 00:05:53.927 00:05:53.927 real 0m11.877s 00:05:53.927 user 0m12.267s 00:05:53.927 sys 0m1.458s 00:05:53.927 18:08:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.927 18:08:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.927 ************************************ 00:05:53.927 END TEST non_locking_app_on_locked_coremask 00:05:53.927 ************************************ 00:05:53.927 18:08:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:53.927 18:08:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.927 18:08:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.927 18:08:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.186 ************************************ 00:05:54.186 START TEST locking_app_on_unlocked_coremask 00:05:54.186 ************************************ 00:05:54.186 18:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:54.186 18:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2339226 00:05:54.186 18:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:54.186 18:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2339226 /var/tmp/spdk.sock 00:05:54.186 18:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2339226 ']' 00:05:54.186 18:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.186 18:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.186 18:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.186 18:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.186 18:08:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.186 [2024-11-28 18:08:06.140670] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:05:54.186 [2024-11-28 18:08:06.140821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2339226 ] 00:05:54.186 [2024-11-28 18:08:06.278266] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:54.186 [2024-11-28 18:08:06.278346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.445 [2024-11-28 18:08:06.413891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.379 18:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.379 18:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.379 18:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2339365 00:05:55.380 18:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.380 18:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2339365 /var/tmp/spdk2.sock 00:05:55.380 18:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2339365 ']' 00:05:55.380 18:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.380 18:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.380 18:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.380 18:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.380 18:08:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.380 [2024-11-28 18:08:07.468979] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:05:55.380 [2024-11-28 18:08:07.469127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2339365 ] 00:05:55.638 [2024-11-28 18:08:07.674596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.896 [2024-11-28 18:08:07.958462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.426 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.426 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.426 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2339365 00:05:58.426 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2339365 00:05:58.426 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.685 lslocks: write error 00:05:58.685 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2339226 00:05:58.685 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2339226 ']' 00:05:58.685 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2339226 00:05:58.685 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.685 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.685 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2339226 00:05:58.685 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.685 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.685 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2339226' 00:05:58.685 killing process with pid 2339226 00:05:58.685 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2339226 00:05:58.685 18:08:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2339226 00:06:03.951 18:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2339365 00:06:03.951 18:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2339365 ']' 00:06:03.951 18:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2339365 00:06:03.951 18:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.951 18:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.951 18:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2339365 00:06:03.951 18:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.951 18:08:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.951 18:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2339365' 00:06:03.951 killing process with pid 2339365 00:06:03.951 18:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2339365 00:06:03.951 18:08:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2339365 00:06:05.852 00:06:05.852 real 0m11.811s 00:06:05.852 user 0m12.218s 00:06:05.852 sys 0m1.445s 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.852 ************************************ 00:06:05.852 END TEST locking_app_on_unlocked_coremask 00:06:05.852 ************************************ 00:06:05.852 18:08:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:05.852 18:08:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.852 18:08:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.852 18:08:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.852 ************************************ 00:06:05.852 START TEST locking_app_on_locked_coremask 00:06:05.852 ************************************ 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2340598 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2340598 /var/tmp/spdk.sock 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2340598 ']' 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.852 18:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.110 [2024-11-28 18:08:18.001016] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:06.110 [2024-11-28 18:08:18.001191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340598 ] 00:06:06.110 [2024-11-28 18:08:18.147635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.368 [2024-11-28 18:08:18.284540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2340853 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2340853 /var/tmp/spdk2.sock 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2340853 /var/tmp/spdk2.sock 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2340853 /var/tmp/spdk2.sock 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2340853 ']' 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.301 18:08:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.301 [2024-11-28 18:08:19.344890] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:07.301 [2024-11-28 18:08:19.345035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340853 ] 00:06:07.559 [2024-11-28 18:08:19.552595] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2340598 has claimed it. 00:06:07.559 [2024-11-28 18:08:19.552687] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:08.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2340853) - No such process 00:06:08.121 ERROR: process (pid: 2340853) is no longer running 00:06:08.121 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.121 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:08.121 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:08.121 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.121 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.121 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.121 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2340598 00:06:08.121 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2340598 00:06:08.121 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.379 lslocks: write error 00:06:08.379 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2340598 00:06:08.379 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2340598 ']' 00:06:08.379 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2340598 00:06:08.379 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.379 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.379 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2340598 00:06:08.379 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.379 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.379 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2340598' 00:06:08.379 killing process with pid 2340598 00:06:08.379 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2340598 00:06:08.379 18:08:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2340598 00:06:10.906 00:06:10.906 real 0m4.923s 00:06:10.906 user 0m5.165s 00:06:10.906 sys 0m0.931s 00:06:10.906 18:08:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
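The failure captured above is the expected result of this test: with locking enabled, a second target asked for an already-claimed core logs "Cannot create lock on core 0" and exits instead of starting. A minimal way to trigger the same error by hand is two launches on the same mask (the sleep is only there to let the first instance take its lock before the second tries):

  ./build/bin/spdk_tgt -m 0x1 &                        # first instance claims core 0
  sleep 2
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # second instance refuses to start and exits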
00:06:10.906 18:08:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.906 ************************************ 00:06:10.906 END TEST locking_app_on_locked_coremask 00:06:10.906 ************************************ 00:06:10.906 18:08:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.906 18:08:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.906 18:08:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.906 18:08:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.906 ************************************ 00:06:10.906 START TEST locking_overlapped_coremask 00:06:10.906 ************************************ 00:06:10.906 18:08:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:10.906 18:08:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2341290 00:06:10.906 18:08:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.906 18:08:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2341290 /var/tmp/spdk.sock 00:06:10.906 18:08:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2341290 ']' 00:06:10.906 18:08:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.906 18:08:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.906 18:08:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.906 18:08:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.906 18:08:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.906 [2024-11-28 18:08:22.973486] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:10.906 [2024-11-28 18:08:22.973641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341290 ] 00:06:11.165 [2024-11-28 18:08:23.121992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.423 [2024-11-28 18:08:23.267765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.423 [2024-11-28 18:08:23.267819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.423 [2024-11-28 18:08:23.267826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2341437 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2341437 /var/tmp/spdk2.sock 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2341437 /var/tmp/spdk2.sock 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2341437 /var/tmp/spdk2.sock 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2341437 ']' 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.389 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.389 [2024-11-28 18:08:24.313727] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:06:12.389 [2024-11-28 18:08:24.313893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341437 ] 00:06:12.672 [2024-11-28 18:08:24.511356] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2341290 has claimed it. 00:06:12.672 [2024-11-28 18:08:24.511452] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2341437) - No such process 00:06:12.930 ERROR: process (pid: 2341437) is no longer running 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2341290 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2341290 ']' 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2341290 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.930 18:08:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2341290 00:06:13.189 18:08:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.189 18:08:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.189 18:08:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2341290' 00:06:13.189 killing process with pid 2341290 00:06:13.189 18:08:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2341290 00:06:13.189 18:08:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2341290 00:06:15.720 00:06:15.720 real 0m4.369s 00:06:15.720 user 0m11.820s 00:06:15.720 sys 0m0.802s 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.720 ************************************ 00:06:15.720 END TEST locking_overlapped_coremask 00:06:15.720 ************************************ 00:06:15.720 18:08:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:15.720 18:08:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.720 18:08:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.720 18:08:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.720 ************************************ 00:06:15.720 START TEST locking_overlapped_coremask_via_rpc 00:06:15.720 ************************************ 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2341863 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2341863 /var/tmp/spdk.sock 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2341863 ']' 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.720 18:08:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.720 [2024-11-28 18:08:27.409166] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:15.720 [2024-11-28 18:08:27.409323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2341863 ] 00:06:15.720 [2024-11-28 18:08:27.583508] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
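Before the previous test tore its target down, check_remaining_locks compared the lock files on disk against what a 0x7 mask should leave behind: one file per claimed core, named by core index. The same comparison can be done directly in the shell, exactly as the helper in the trace does; the expected list assumes cores 0-2:

  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${expected[*]}" ]] && echo "locks match -m 0x7" || echo "unexpected lock files: ${locks[*]}"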
00:06:15.720 [2024-11-28 18:08:27.583595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.720 [2024-11-28 18:08:27.731370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.720 [2024-11-28 18:08:27.731430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.720 [2024-11-28 18:08:27.731440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.656 18:08:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.656 18:08:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.656 18:08:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2342006 00:06:16.656 18:08:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2342006 /var/tmp/spdk2.sock 00:06:16.656 18:08:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2342006 ']' 00:06:16.656 18:08:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.656 18:08:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.656 18:08:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.656 18:08:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.656 18:08:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.656 18:08:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:16.914 [2024-11-28 18:08:28.812677] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:16.914 [2024-11-28 18:08:28.812829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2342006 ] 00:06:16.914 [2024-11-28 18:08:29.008833] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
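Both targets in this final test start with their locks disabled, on masks that overlap: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the one they share. That overlap is what the later framework_enable_cpumask_locks calls will fight over, and it can be confirmed with plain shell arithmetic:

  printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. only core 2 is in both masks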
00:06:16.914 [2024-11-28 18:08:29.008905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.172 [2024-11-28 18:08:29.270596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.172 [2024-11-28 18:08:29.270647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.172 [2024-11-28 18:08:29.270654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.702 [2024-11-28 18:08:31.539268] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2341863 has claimed it. 
00:06:19.702 request: 00:06:19.702 { 00:06:19.702 "method": "framework_enable_cpumask_locks", 00:06:19.702 "req_id": 1 00:06:19.702 } 00:06:19.702 Got JSON-RPC error response 00:06:19.702 response: 00:06:19.702 { 00:06:19.702 "code": -32603, 00:06:19.702 "message": "Failed to claim CPU core: 2" 00:06:19.702 } 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2341863 /var/tmp/spdk.sock 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2341863 ']' 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.702 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.960 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.960 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.960 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2342006 /var/tmp/spdk2.sock 00:06:19.960 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2342006 ']' 00:06:19.960 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.960 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.960 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
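The JSON-RPC exchange at the top of this block is the programmatic form of the same collision: the second target (cores 2-4) is asked to switch its locks on while the first target (cores 0-2) has already claimed core 2, so the call fails with -32603. Assuming SPDK's scripts/rpc.py exposes these framework methods by name, as the rpc_cmd wrapper in the trace suggests, the exchange would look roughly like:

  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # fails: "Failed to claim CPU core: 2"
  scripts/rpc.py -s /var/tmp/spdk.sock  framework_disable_cpumask_locks   # releasing the first target's locks should let a retry succeed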
00:06:19.960 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.960 18:08:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.218 18:08:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.218 18:08:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.218 18:08:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:20.218 18:08:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.218 18:08:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.219 18:08:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.219 00:06:20.219 real 0m4.814s 00:06:20.219 user 0m1.698s 00:06:20.219 sys 0m0.265s 00:06:20.219 18:08:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.219 18:08:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.219 ************************************ 00:06:20.219 END TEST locking_overlapped_coremask_via_rpc 00:06:20.219 ************************************ 00:06:20.219 18:08:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:20.219 18:08:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2341863 ]] 00:06:20.219 18:08:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2341863 00:06:20.219 18:08:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2341863 ']' 00:06:20.219 18:08:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2341863 00:06:20.219 18:08:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:20.219 18:08:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.219 18:08:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2341863 00:06:20.219 18:08:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.219 18:08:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.219 18:08:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2341863' 00:06:20.219 killing process with pid 2341863 00:06:20.219 18:08:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2341863 00:06:20.219 18:08:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2341863 00:06:22.748 18:08:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2342006 ]] 00:06:22.748 18:08:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2342006 00:06:22.748 18:08:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2342006 ']' 00:06:22.748 18:08:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2342006 00:06:22.748 18:08:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:22.748 18:08:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:22.748 18:08:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2342006 00:06:22.748 18:08:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:22.748 18:08:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:22.748 18:08:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2342006' 00:06:22.748 killing process with pid 2342006 00:06:22.748 18:08:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2342006 00:06:22.748 18:08:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2342006 00:06:24.651 18:08:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.651 18:08:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.651 18:08:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2341863 ]] 00:06:24.651 18:08:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2341863 00:06:24.651 18:08:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2341863 ']' 00:06:24.651 18:08:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2341863 00:06:24.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2341863) - No such process 00:06:24.651 18:08:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2341863 is not found' 00:06:24.651 Process with pid 2341863 is not found 00:06:24.651 18:08:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2342006 ]] 00:06:24.651 18:08:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2342006 00:06:24.651 18:08:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2342006 ']' 00:06:24.651 18:08:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2342006 00:06:24.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2342006) - No such process 00:06:24.651 18:08:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2342006 is not found' 00:06:24.651 Process with pid 2342006 is not found 00:06:24.651 18:08:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.651 00:06:24.651 real 0m51.073s 00:06:24.651 user 1m27.448s 00:06:24.651 sys 0m7.730s 00:06:24.651 18:08:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.651 18:08:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.651 ************************************ 00:06:24.651 END TEST cpu_locks 00:06:24.651 ************************************ 00:06:24.651 00:06:24.651 real 1m21.232s 00:06:24.651 user 2m27.940s 00:06:24.651 sys 0m12.288s 00:06:24.651 18:08:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.651 18:08:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.651 ************************************ 00:06:24.651 END TEST event 00:06:24.651 ************************************ 00:06:24.651 18:08:36 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:24.651 18:08:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.651 18:08:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.651 18:08:36 -- common/autotest_common.sh@10 -- # set +x 00:06:24.651 ************************************ 00:06:24.651 START TEST thread 00:06:24.651 ************************************ 00:06:24.651 18:08:36 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:24.910 * Looking for test storage... 00:06:24.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.910 18:08:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.910 18:08:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.910 18:08:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.910 18:08:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.910 18:08:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.910 18:08:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.910 18:08:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.910 18:08:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.910 18:08:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.910 18:08:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.910 18:08:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.910 18:08:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:24.910 18:08:36 thread -- scripts/common.sh@345 -- # : 1 00:06:24.910 18:08:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.910 18:08:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.910 18:08:36 thread -- scripts/common.sh@365 -- # decimal 1 00:06:24.910 18:08:36 thread -- scripts/common.sh@353 -- # local d=1 00:06:24.910 18:08:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.910 18:08:36 thread -- scripts/common.sh@355 -- # echo 1 00:06:24.910 18:08:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.910 18:08:36 thread -- scripts/common.sh@366 -- # decimal 2 00:06:24.910 18:08:36 thread -- scripts/common.sh@353 -- # local d=2 00:06:24.910 18:08:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.910 18:08:36 thread -- scripts/common.sh@355 -- # echo 2 00:06:24.910 18:08:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.910 18:08:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.910 18:08:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.910 18:08:36 thread -- scripts/common.sh@368 -- # return 0 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.910 --rc genhtml_branch_coverage=1 00:06:24.910 --rc genhtml_function_coverage=1 00:06:24.910 --rc genhtml_legend=1 00:06:24.910 --rc geninfo_all_blocks=1 00:06:24.910 --rc geninfo_unexecuted_blocks=1 00:06:24.910 00:06:24.910 ' 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.910 --rc genhtml_branch_coverage=1 00:06:24.910 --rc genhtml_function_coverage=1 00:06:24.910 --rc genhtml_legend=1 00:06:24.910 --rc geninfo_all_blocks=1 00:06:24.910 --rc geninfo_unexecuted_blocks=1 00:06:24.910 
00:06:24.910 ' 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.910 --rc genhtml_branch_coverage=1 00:06:24.910 --rc genhtml_function_coverage=1 00:06:24.910 --rc genhtml_legend=1 00:06:24.910 --rc geninfo_all_blocks=1 00:06:24.910 --rc geninfo_unexecuted_blocks=1 00:06:24.910 00:06:24.910 ' 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.910 --rc genhtml_branch_coverage=1 00:06:24.910 --rc genhtml_function_coverage=1 00:06:24.910 --rc genhtml_legend=1 00:06:24.910 --rc geninfo_all_blocks=1 00:06:24.910 --rc geninfo_unexecuted_blocks=1 00:06:24.910 00:06:24.910 ' 00:06:24.910 18:08:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.910 18:08:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.910 ************************************ 00:06:24.910 START TEST thread_poller_perf 00:06:24.910 ************************************ 00:06:24.910 18:08:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.910 [2024-11-28 18:08:36.933073] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:24.911 [2024-11-28 18:08:36.933217] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343052 ] 00:06:25.169 [2024-11-28 18:08:37.072426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.169 [2024-11-28 18:08:37.210791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.169 Running 1000 pollers for 1 seconds with 1 microseconds period. 
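poller_perf is invoked here with -b 1000 (register 1000 pollers), -l 1 (1-microsecond period) and -t 1 (run for 1 second), as echoed in the "Running 1000 pollers..." line above. The results block that follows reports the busy TSC cycles, the total number of poller executions and the TSC frequency, and the printed poller_cost is consistent with:

    poller_cost (cyc)  = busy / total_run_count
    poller_cost (nsec) = poller_cost (cyc) / (tsc_hz / 1e9)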
00:06:26.544 [2024-11-28T17:08:38.644Z] ====================================== 00:06:26.544 [2024-11-28T17:08:38.644Z] busy:2716879874 (cyc) 00:06:26.544 [2024-11-28T17:08:38.644Z] total_run_count: 291000 00:06:26.544 [2024-11-28T17:08:38.644Z] tsc_hz: 2700000000 (cyc) 00:06:26.544 [2024-11-28T17:08:38.644Z] ====================================== 00:06:26.544 [2024-11-28T17:08:38.644Z] poller_cost: 9336 (cyc), 3457 (nsec) 00:06:26.544 00:06:26.544 real 0m1.585s 00:06:26.544 user 0m1.426s 00:06:26.544 sys 0m0.151s 00:06:26.544 18:08:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.544 18:08:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.544 ************************************ 00:06:26.544 END TEST thread_poller_perf 00:06:26.544 ************************************ 00:06:26.544 18:08:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.544 18:08:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:26.544 18:08:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.544 18:08:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.544 ************************************ 00:06:26.544 START TEST thread_poller_perf 00:06:26.544 ************************************ 00:06:26.544 18:08:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.544 [2024-11-28 18:08:38.574293] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:26.544 [2024-11-28 18:08:38.574423] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343326 ] 00:06:26.803 [2024-11-28 18:08:38.734752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.803 [2024-11-28 18:08:38.872472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.803 Running 1000 pollers for 1 seconds with 0 microseconds period. 
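As a check on the first run's figures: 2716879874 busy cycles over 291000 runs is about 9336 cycles per poll, and at the reported tsc_hz of 2700000000 (2.7 cycles per nanosecond) that is about 3457 nsec, matching the printed poller_cost of 9336 (cyc), 3457 (nsec). The second run, launched above with -l 0 (zero-period pollers), prints its own block next and comes out roughly an order of magnitude cheaper per poll.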
00:06:28.188 [2024-11-28T17:08:40.288Z] ====================================== 00:06:28.188 [2024-11-28T17:08:40.288Z] busy:2704747036 (cyc) 00:06:28.188 [2024-11-28T17:08:40.288Z] total_run_count: 3605000 00:06:28.188 [2024-11-28T17:08:40.288Z] tsc_hz: 2700000000 (cyc) 00:06:28.188 [2024-11-28T17:08:40.288Z] ====================================== 00:06:28.188 [2024-11-28T17:08:40.288Z] poller_cost: 750 (cyc), 277 (nsec) 00:06:28.188 00:06:28.188 real 0m1.583s 00:06:28.188 user 0m1.415s 00:06:28.188 sys 0m0.159s 00:06:28.188 18:08:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.188 18:08:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.188 ************************************ 00:06:28.188 END TEST thread_poller_perf 00:06:28.188 ************************************ 00:06:28.188 18:08:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:28.188 00:06:28.188 real 0m3.411s 00:06:28.188 user 0m2.971s 00:06:28.188 sys 0m0.438s 00:06:28.188 18:08:40 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.188 18:08:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.188 ************************************ 00:06:28.188 END TEST thread 00:06:28.188 ************************************ 00:06:28.188 18:08:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:28.188 18:08:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:28.188 18:08:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.188 18:08:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.188 18:08:40 -- common/autotest_common.sh@10 -- # set +x 00:06:28.188 ************************************ 00:06:28.188 START TEST app_cmdline 00:06:28.188 ************************************ 00:06:28.188 18:08:40 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:28.188 * Looking for test storage... 
00:06:28.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:28.188 18:08:40 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:28.188 18:08:40 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:28.188 18:08:40 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:28.447 18:08:40 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:28.447 18:08:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.448 18:08:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:28.448 18:08:40 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.448 18:08:40 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.448 --rc genhtml_branch_coverage=1 00:06:28.448 --rc genhtml_function_coverage=1 00:06:28.448 --rc genhtml_legend=1 00:06:28.448 --rc geninfo_all_blocks=1 00:06:28.448 --rc geninfo_unexecuted_blocks=1 00:06:28.448 00:06:28.448 ' 00:06:28.448 18:08:40 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.448 --rc genhtml_branch_coverage=1 00:06:28.448 --rc genhtml_function_coverage=1 00:06:28.448 --rc genhtml_legend=1 00:06:28.448 --rc geninfo_all_blocks=1 00:06:28.448 --rc geninfo_unexecuted_blocks=1 
00:06:28.448 00:06:28.448 ' 00:06:28.448 18:08:40 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.448 --rc genhtml_branch_coverage=1 00:06:28.448 --rc genhtml_function_coverage=1 00:06:28.448 --rc genhtml_legend=1 00:06:28.448 --rc geninfo_all_blocks=1 00:06:28.448 --rc geninfo_unexecuted_blocks=1 00:06:28.448 00:06:28.448 ' 00:06:28.448 18:08:40 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.448 --rc genhtml_branch_coverage=1 00:06:28.448 --rc genhtml_function_coverage=1 00:06:28.448 --rc genhtml_legend=1 00:06:28.448 --rc geninfo_all_blocks=1 00:06:28.448 --rc geninfo_unexecuted_blocks=1 00:06:28.448 00:06:28.448 ' 00:06:28.448 18:08:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:28.448 18:08:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2343537 00:06:28.448 18:08:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:28.448 18:08:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2343537 00:06:28.448 18:08:40 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2343537 ']' 00:06:28.448 18:08:40 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.448 18:08:40 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.448 18:08:40 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.448 18:08:40 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.448 18:08:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.448 [2024-11-28 18:08:40.439673] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
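For this test, spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over /var/tmp/spdk.sock. spdk_get_version (driven below through scripts/rpc.py) returns the version object with major/minor/patch/suffix/commit fields, while a method outside the allowlist, env_dpdk_get_mem_stats, is exercised further down and rejected with JSON-RPC error -32601 ("Method not found"). A minimal sketch of querying the allowed method by hand against the default socket:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version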
00:06:28.448 [2024-11-28 18:08:40.439810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343537 ] 00:06:28.706 [2024-11-28 18:08:40.583745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.706 [2024-11-28 18:08:40.721785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.640 18:08:41 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.640 18:08:41 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:29.640 18:08:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:29.898 { 00:06:29.898 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:06:29.898 "fields": { 00:06:29.898 "major": 25, 00:06:29.898 "minor": 1, 00:06:29.898 "patch": 0, 00:06:29.898 "suffix": "-pre", 00:06:29.898 "commit": "35cd3e84d" 00:06:29.898 } 00:06:29.898 } 00:06:29.898 18:08:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:29.898 18:08:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:29.899 18:08:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:29.899 18:08:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:30.157 18:08:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:30.157 18:08:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:30.157 18:08:41 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.157 18:08:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.157 18:08:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.157 18:08:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:30.157 18:08:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:30.157 18:08:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:30.157 18:08:42 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.416 request: 00:06:30.416 { 00:06:30.416 "method": "env_dpdk_get_mem_stats", 00:06:30.416 "req_id": 1 00:06:30.416 } 00:06:30.416 Got JSON-RPC error response 00:06:30.416 response: 00:06:30.416 { 00:06:30.416 "code": -32601, 00:06:30.416 "message": "Method not found" 00:06:30.416 } 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.416 18:08:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2343537 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2343537 ']' 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2343537 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2343537 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2343537' 00:06:30.416 killing process with pid 2343537 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@973 -- # kill 2343537 00:06:30.416 18:08:42 app_cmdline -- common/autotest_common.sh@978 -- # wait 2343537 00:06:32.947 00:06:32.947 real 0m4.620s 00:06:32.947 user 0m5.142s 00:06:32.947 sys 0m0.694s 00:06:32.947 18:08:44 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.947 18:08:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.947 ************************************ 00:06:32.947 END TEST app_cmdline 00:06:32.947 ************************************ 00:06:32.947 18:08:44 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:32.947 18:08:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.947 18:08:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.947 18:08:44 -- common/autotest_common.sh@10 -- # set +x 00:06:32.947 ************************************ 00:06:32.947 START TEST version 00:06:32.947 ************************************ 00:06:32.947 18:08:44 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:32.947 * Looking for test storage... 
00:06:32.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:32.947 18:08:44 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.947 18:08:44 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.947 18:08:44 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.947 18:08:44 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.947 18:08:44 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.947 18:08:44 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.947 18:08:44 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.947 18:08:44 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.947 18:08:44 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.947 18:08:44 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.947 18:08:44 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.947 18:08:44 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.947 18:08:44 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.947 18:08:44 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.947 18:08:44 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.947 18:08:44 version -- scripts/common.sh@344 -- # case "$op" in 00:06:32.947 18:08:44 version -- scripts/common.sh@345 -- # : 1 00:06:32.947 18:08:44 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.947 18:08:44 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.947 18:08:44 version -- scripts/common.sh@365 -- # decimal 1 00:06:32.947 18:08:44 version -- scripts/common.sh@353 -- # local d=1 00:06:32.947 18:08:44 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.947 18:08:45 version -- scripts/common.sh@355 -- # echo 1 00:06:32.947 18:08:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.947 18:08:45 version -- scripts/common.sh@366 -- # decimal 2 00:06:32.947 18:08:45 version -- scripts/common.sh@353 -- # local d=2 00:06:32.947 18:08:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.947 18:08:45 version -- scripts/common.sh@355 -- # echo 2 00:06:32.947 18:08:45 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.947 18:08:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.947 18:08:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.947 18:08:45 version -- scripts/common.sh@368 -- # return 0 00:06:32.947 18:08:45 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.947 18:08:45 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.947 --rc genhtml_branch_coverage=1 00:06:32.947 --rc genhtml_function_coverage=1 00:06:32.947 --rc genhtml_legend=1 00:06:32.947 --rc geninfo_all_blocks=1 00:06:32.947 --rc geninfo_unexecuted_blocks=1 00:06:32.947 00:06:32.947 ' 00:06:32.947 18:08:45 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.947 --rc genhtml_branch_coverage=1 00:06:32.947 --rc genhtml_function_coverage=1 00:06:32.947 --rc genhtml_legend=1 00:06:32.947 --rc geninfo_all_blocks=1 00:06:32.947 --rc geninfo_unexecuted_blocks=1 00:06:32.947 00:06:32.947 ' 00:06:32.947 18:08:45 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.947 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.947 --rc genhtml_branch_coverage=1 00:06:32.947 --rc genhtml_function_coverage=1 00:06:32.947 --rc genhtml_legend=1 00:06:32.947 --rc geninfo_all_blocks=1 00:06:32.947 --rc geninfo_unexecuted_blocks=1 00:06:32.947 00:06:32.947 ' 00:06:32.947 18:08:45 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.947 --rc genhtml_branch_coverage=1 00:06:32.947 --rc genhtml_function_coverage=1 00:06:32.947 --rc genhtml_legend=1 00:06:32.947 --rc geninfo_all_blocks=1 00:06:32.947 --rc geninfo_unexecuted_blocks=1 00:06:32.947 00:06:32.947 ' 00:06:32.947 18:08:45 version -- app/version.sh@17 -- # get_header_version major 00:06:32.947 18:08:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.947 18:08:45 version -- app/version.sh@14 -- # cut -f2 00:06:32.947 18:08:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.947 18:08:45 version -- app/version.sh@17 -- # major=25 00:06:32.947 18:08:45 version -- app/version.sh@18 -- # get_header_version minor 00:06:32.947 18:08:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.947 18:08:45 version -- app/version.sh@14 -- # cut -f2 00:06:32.947 18:08:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.947 18:08:45 version -- app/version.sh@18 -- # minor=1 00:06:32.947 18:08:45 version -- app/version.sh@19 -- # get_header_version patch 00:06:32.947 18:08:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.947 18:08:45 version -- app/version.sh@14 -- # cut -f2 00:06:32.947 18:08:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.947 18:08:45 version -- app/version.sh@19 -- # patch=0 00:06:32.947 18:08:45 version -- app/version.sh@20 -- # get_header_version suffix 00:06:32.947 18:08:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.947 18:08:45 version -- app/version.sh@14 -- # cut -f2 00:06:32.947 18:08:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.947 18:08:45 version -- app/version.sh@20 -- # suffix=-pre 00:06:32.947 18:08:45 version -- app/version.sh@22 -- # version=25.1 00:06:32.947 18:08:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:32.947 18:08:45 version -- app/version.sh@28 -- # version=25.1rc0 00:06:32.948 18:08:45 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:32.948 18:08:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:33.209 18:08:45 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:33.209 18:08:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:33.209 00:06:33.209 real 0m0.197s 00:06:33.209 user 0m0.136s 00:06:33.209 sys 0m0.089s 00:06:33.209 18:08:45 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.209 
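version.sh above extracts each component from include/spdk/version.h with a grep/cut/tr pipeline, builds "25.1" (the patch value of 0 is omitted), and, because the suffix is the non-empty "-pre", ends up comparing the release-candidate form 25.1rc0 against what the Python package reports via 'import spdk; print(spdk.__version__)'. A sketch of the extraction step for the major number, run from the spdk checkout (the same pipeline the script traces above):

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'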
18:08:45 version -- common/autotest_common.sh@10 -- # set +x 00:06:33.209 ************************************ 00:06:33.209 END TEST version 00:06:33.209 ************************************ 00:06:33.209 18:08:45 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:33.209 18:08:45 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:33.209 18:08:45 -- spdk/autotest.sh@194 -- # uname -s 00:06:33.209 18:08:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:33.209 18:08:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:33.209 18:08:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:33.209 18:08:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:33.209 18:08:45 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:33.209 18:08:45 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:33.209 18:08:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.209 18:08:45 -- common/autotest_common.sh@10 -- # set +x 00:06:33.209 18:08:45 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:33.209 18:08:45 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:33.209 18:08:45 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:33.209 18:08:45 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:33.209 18:08:45 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:33.209 18:08:45 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:33.209 18:08:45 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:33.209 18:08:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.209 18:08:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.209 18:08:45 -- common/autotest_common.sh@10 -- # set +x 00:06:33.209 ************************************ 00:06:33.209 START TEST nvmf_tcp 00:06:33.209 ************************************ 00:06:33.209 18:08:45 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:33.209 * Looking for test storage... 
00:06:33.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:33.209 18:08:45 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.209 18:08:45 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.209 18:08:45 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.209 18:08:45 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.209 18:08:45 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:33.209 18:08:45 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.209 18:08:45 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.209 --rc genhtml_branch_coverage=1 00:06:33.209 --rc genhtml_function_coverage=1 00:06:33.209 --rc genhtml_legend=1 00:06:33.209 --rc geninfo_all_blocks=1 00:06:33.209 --rc geninfo_unexecuted_blocks=1 00:06:33.209 00:06:33.209 ' 00:06:33.209 18:08:45 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.209 --rc genhtml_branch_coverage=1 00:06:33.209 --rc genhtml_function_coverage=1 00:06:33.209 --rc genhtml_legend=1 00:06:33.209 --rc geninfo_all_blocks=1 00:06:33.210 --rc geninfo_unexecuted_blocks=1 00:06:33.210 00:06:33.210 ' 00:06:33.210 18:08:45 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:33.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.210 --rc genhtml_branch_coverage=1 00:06:33.210 --rc genhtml_function_coverage=1 00:06:33.210 --rc genhtml_legend=1 00:06:33.210 --rc geninfo_all_blocks=1 00:06:33.210 --rc geninfo_unexecuted_blocks=1 00:06:33.210 00:06:33.210 ' 00:06:33.210 18:08:45 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.210 --rc genhtml_branch_coverage=1 00:06:33.210 --rc genhtml_function_coverage=1 00:06:33.210 --rc genhtml_legend=1 00:06:33.210 --rc geninfo_all_blocks=1 00:06:33.210 --rc geninfo_unexecuted_blocks=1 00:06:33.210 00:06:33.210 ' 00:06:33.210 18:08:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:33.210 18:08:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:33.210 18:08:45 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:33.210 18:08:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.210 18:08:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.210 18:08:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.468 ************************************ 00:06:33.468 START TEST nvmf_target_core 00:06:33.468 ************************************ 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:33.468 * Looking for test storage... 00:06:33.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.468 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.469 --rc genhtml_branch_coverage=1 00:06:33.469 --rc genhtml_function_coverage=1 00:06:33.469 --rc genhtml_legend=1 00:06:33.469 --rc geninfo_all_blocks=1 00:06:33.469 --rc geninfo_unexecuted_blocks=1 00:06:33.469 00:06:33.469 ' 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.469 --rc genhtml_branch_coverage=1 00:06:33.469 --rc genhtml_function_coverage=1 00:06:33.469 --rc genhtml_legend=1 00:06:33.469 --rc geninfo_all_blocks=1 00:06:33.469 --rc geninfo_unexecuted_blocks=1 00:06:33.469 00:06:33.469 ' 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.469 --rc genhtml_branch_coverage=1 00:06:33.469 --rc genhtml_function_coverage=1 00:06:33.469 --rc genhtml_legend=1 00:06:33.469 --rc geninfo_all_blocks=1 00:06:33.469 --rc geninfo_unexecuted_blocks=1 00:06:33.469 00:06:33.469 ' 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.469 --rc genhtml_branch_coverage=1 00:06:33.469 --rc genhtml_function_coverage=1 00:06:33.469 --rc genhtml_legend=1 00:06:33.469 --rc geninfo_all_blocks=1 00:06:33.469 --rc geninfo_unexecuted_blocks=1 00:06:33.469 00:06:33.469 ' 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.469 
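The "integer expression expected" complaint above appears benign here: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' with an empty left operand (the variable it guards on expands to nothing in this environment), bash's test builtin rejects the non-numeric operand and returns a non-zero status, the guarded branch is skipped, and the sourcing continues normally, reaching the default have_pci_nics=0 assignment a few lines later. A one-line reproduction in a throwaway shell:

    [ '' -eq 1 ]; echo "exit=$?"    # prints the same complaint and a non-zero status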
************************************ 00:06:33.469 START TEST nvmf_abort 00:06:33.469 ************************************ 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:33.469 * Looking for test storage... 00:06:33.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.469 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.728 --rc genhtml_branch_coverage=1 00:06:33.728 --rc genhtml_function_coverage=1 00:06:33.728 --rc genhtml_legend=1 00:06:33.728 --rc geninfo_all_blocks=1 00:06:33.728 --rc geninfo_unexecuted_blocks=1 00:06:33.728 00:06:33.728 ' 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.728 --rc genhtml_branch_coverage=1 00:06:33.728 --rc genhtml_function_coverage=1 00:06:33.728 --rc genhtml_legend=1 00:06:33.728 --rc geninfo_all_blocks=1 00:06:33.728 --rc geninfo_unexecuted_blocks=1 00:06:33.728 00:06:33.728 ' 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.728 --rc genhtml_branch_coverage=1 00:06:33.728 --rc genhtml_function_coverage=1 00:06:33.728 --rc genhtml_legend=1 00:06:33.728 --rc geninfo_all_blocks=1 00:06:33.728 --rc geninfo_unexecuted_blocks=1 00:06:33.728 00:06:33.728 ' 00:06:33.728 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.728 --rc genhtml_branch_coverage=1 00:06:33.728 --rc genhtml_function_coverage=1 00:06:33.728 --rc genhtml_legend=1 00:06:33.728 --rc geninfo_all_blocks=1 00:06:33.728 --rc geninfo_unexecuted_blocks=1 00:06:33.728 00:06:33.729 ' 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
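Note: the "[: : integer expression expected" complaint repeated above is noise from the test library rather than a failure of the test itself. Line 33 of nvmf/common.sh runs an integer comparison ('[' '' -eq 1 ']') against a variable that is empty in this environment, so [ rejects the empty operand and the test simply evaluates to false. A minimal sketch of the pattern and of defensive variants that would avoid the warning; the variable name "flag" is only a stand-in for whichever setting common.sh consults there:

    flag=''
    [ "$flag" -eq 1 ]        # what the trace shows: "[: : integer expression expected", non-zero status
    [ "${flag:-0}" -eq 1 ]   # default an empty/unset value to 0 before the integer test
    [[ "$flag" == 1 ]]       # or compare as a string, which tolerates an empty value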
00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:33.729 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.632 18:08:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:35.632 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:35.632 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:35.632 18:08:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:35.632 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:35.632 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:35.633 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.633 18:08:47 
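The device discovery traced above reduces to a sysfs lookup: both reported ports (0000:0a:00.0 and 0000:0a:00.1, Intel 8086:159b bound to the ice driver) match the e810 allow-list, and for each one the script lists the kernel network interfaces registered under that PCI address and keeps only their basenames. A condensed sketch of that lookup, using the same sysfs layout the trace relies on:

    pci=0000:0a:00.0
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # e.g. .../net/cvl_0_0
    pci_net_devs=( "${pci_net_devs[@]##*/}" )            # drop the sysfs prefix, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

With two usable ports found, the first (cvl_0_0) becomes the target interface and the second (cvl_0_1) the initiator interface for the rest of the run.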
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.633 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.893 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.893 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.893 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:35.893 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.893 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.893 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.893 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:35.893 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:35.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:35.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:06:35.893 00:06:35.893 --- 10.0.0.2 ping statistics --- 00:06:35.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.893 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:06:35.893 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:35.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:06:35.893 00:06:35.893 --- 10.0.0.1 ping statistics --- 00:06:35.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.893 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:06:35.893 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2346016 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2346016 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2346016 ']' 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.894 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.894 [2024-11-28 18:08:47.948029] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
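Condensed, the nvmf_tcp_init sequence above builds the test topology by moving the target port into a dedicated network namespace, keeping the initiator port in the root namespace, opening the NVMe/TCP port in the firewall, and proving connectivity in both directions before any NVMe traffic flows. A reduced sketch of those steps, with the addresses and interface names used in this run:

    ip netns add cvl_0_0_ns_spdk                                   # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept inbound NVMe/TCP (port 4420) on the initiator-side interface
    ping -c 1 10.0.0.2                                             # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace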
00:06:35.894 [2024-11-28 18:08:47.948214] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.152 [2024-11-28 18:08:48.107828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.411 [2024-11-28 18:08:48.254265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:36.411 [2024-11-28 18:08:48.254339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:36.411 [2024-11-28 18:08:48.254366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.411 [2024-11-28 18:08:48.254390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.411 [2024-11-28 18:08:48.254410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:36.411 [2024-11-28 18:08:48.258139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.411 [2024-11-28 18:08:48.258210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.411 [2024-11-28 18:08:48.258210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.979 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.979 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:36.979 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:36.980 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.980 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.980 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:36.980 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:36.980 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.980 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.980 [2024-11-28 18:08:48.980968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.980 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.980 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:36.980 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.980 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:36.980 Malloc0 00:06:36.980 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.980 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:36.980 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.980 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.239 Delay0 
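At this point the target application itself is up: nvmf_tgt runs inside the target namespace on cores 1-3 (-m 0xE) with all tracepoint groups enabled, and abort.sh configures it over the RPC socket (rpc_cmd in the trace is the harness helper that forwards its arguments to scripts/rpc.py). A sketch of that bring-up with the same arguments as the trace; the latency values passed to bdev_delay_create are in microseconds, so the delay bdev holds each I/O for roughly a second, which is what gives the later abort requests something to chase:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # once /var/tmp/spdk.sock answers:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256   # TCP transport, options as passed by abort.sh
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB RAM-backed bdev, 4096-byte blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # wrap it with ~1 s average/tail latency

The subsystem side follows in the trace below: Delay0 is attached to nqn.2016-06.io.spdk:cnode0, which is then exposed on 10.0.0.2:4420 together with the discovery subsystem.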
00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.239 [2024-11-28 18:08:49.102754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.239 18:08:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:37.239 [2024-11-28 18:08:49.321251] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:39.771 Initializing NVMe Controllers 00:06:39.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:39.771 controller IO queue size 128 less than required 00:06:39.771 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:39.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:39.771 Initialization complete. Launching workers. 
00:06:39.771 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 20817 00:06:39.771 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 20874, failed to submit 66 00:06:39.771 success 20817, unsuccessful 57, failed 0 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:39.771 rmmod nvme_tcp 00:06:39.771 rmmod nvme_fabrics 00:06:39.771 rmmod nvme_keyring 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2346016 ']' 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2346016 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2346016 ']' 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2346016 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2346016 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2346016' 00:06:39.771 killing process with pid 2346016 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2346016 00:06:39.771 18:08:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2346016 00:06:41.145 18:08:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:41.145 18:08:52 
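The abort summary above is internally consistent: with Delay0 holding every request and a queue depth of 128, nearly all submitted I/O is still outstanding when an abort is issued for it, so 20,817 of the 20,874 submitted aborts succeed, 57 come back unsuccessful (presumably the command had already completed or could no longer be matched), 66 aborts could not be submitted at all, and only 123 I/Os complete normally, which is the shape of result this test is after. For reference, the workload is the stock SPDK abort example pointed at the listener created above:

    ./build/examples/abort -q 128 -t 1 -c 0x1 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # -q 128: queue depth, -t 1: run time in seconds, -c 0x1: single core, as used in this run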
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:41.145 18:08:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:41.145 18:08:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:41.145 18:08:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:41.145 18:08:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:41.145 18:08:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:41.145 18:08:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:41.145 18:08:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:41.145 18:08:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.145 18:08:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.145 18:08:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.053 18:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:43.053 00:06:43.053 real 0m9.430s 00:06:43.053 user 0m15.908s 00:06:43.053 sys 0m2.871s 00:06:43.053 18:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.053 18:08:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.053 ************************************ 00:06:43.053 END TEST nvmf_abort 00:06:43.053 ************************************ 00:06:43.053 18:08:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:43.053 18:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.053 18:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.053 18:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:43.053 ************************************ 00:06:43.053 START TEST nvmf_ns_hotplug_stress 00:06:43.053 ************************************ 00:06:43.053 18:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:43.053 * Looking for test storage... 
00:06:43.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.053 --rc genhtml_branch_coverage=1 00:06:43.053 --rc genhtml_function_coverage=1 00:06:43.053 --rc genhtml_legend=1 00:06:43.053 --rc geninfo_all_blocks=1 00:06:43.053 --rc geninfo_unexecuted_blocks=1 00:06:43.053 00:06:43.053 ' 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.053 --rc genhtml_branch_coverage=1 00:06:43.053 --rc genhtml_function_coverage=1 00:06:43.053 --rc genhtml_legend=1 00:06:43.053 --rc geninfo_all_blocks=1 00:06:43.053 --rc geninfo_unexecuted_blocks=1 00:06:43.053 00:06:43.053 ' 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.053 --rc genhtml_branch_coverage=1 00:06:43.053 --rc genhtml_function_coverage=1 00:06:43.053 --rc genhtml_legend=1 00:06:43.053 --rc geninfo_all_blocks=1 00:06:43.053 --rc geninfo_unexecuted_blocks=1 00:06:43.053 00:06:43.053 ' 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.053 --rc genhtml_branch_coverage=1 00:06:43.053 --rc genhtml_function_coverage=1 00:06:43.053 --rc genhtml_legend=1 00:06:43.053 --rc geninfo_all_blocks=1 00:06:43.053 --rc geninfo_unexecuted_blocks=1 00:06:43.053 00:06:43.053 ' 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.053 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.054 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:43.337 18:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:45.245 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:45.246 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.246 
18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:45.246 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:45.246 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:45.246 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:45.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:06:45.246 00:06:45.246 --- 10.0.0.2 ping statistics --- 00:06:45.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.246 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:06:45.246 00:06:45.246 --- 10.0.0.1 ping statistics --- 00:06:45.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.246 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:06:45.246 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2348527 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2348527 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2348527 ']' 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.247 18:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:45.505 [2024-11-28 18:08:57.435437] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:06:45.505 [2024-11-28 18:08:57.435605] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.505 [2024-11-28 18:08:57.591764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.764 [2024-11-28 18:08:57.736194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.764 [2024-11-28 18:08:57.736273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.764 [2024-11-28 18:08:57.736307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.764 [2024-11-28 18:08:57.736333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.764 [2024-11-28 18:08:57.736353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
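The trace above is nvmftestinit (test/nvmf/common.sh) wiring the two detected E810 ports together over TCP: one port, cvl_0_0, is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1, and the target application is then launched inside that namespace. A condensed sketch of those steps, using the interface names, addresses and flags from this particular run, with paths abbreviated to the spdk checkout (the real helper also flushes old addresses, tags the iptables rule with a comment, and installs cleanup traps):

  ip netns add cvl_0_0_ns_spdk                      # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # target reachable from the host
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # initiator reachable from the namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The -m 0xE core mask is why three reactors come up on cores 1-3 a few lines below, and -e 0xFFFF is the tracepoint group mask mentioned in the app_setup_trace notices.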
00:06:45.764 [2024-11-28 18:08:57.739021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.764 [2024-11-28 18:08:57.739078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.764 [2024-11-28 18:08:57.739081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.697 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.697 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:46.697 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:46.697 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.697 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:46.697 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.697 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:46.697 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.697 [2024-11-28 18:08:58.740570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.697 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:47.264 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:47.264 [2024-11-28 18:08:59.318691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.264 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.522 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:48.089 Malloc0 00:06:48.089 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:48.347 Delay0 00:06:48.347 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.607 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:48.866 NULL1 00:06:48.866 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:49.125 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2349096 00:06:49.125 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:49.125 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:06:49.125 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.498 Read completed with error (sct=0, sc=11) 00:06:50.498 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.755 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:50.756 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:51.013 true 00:06:51.013 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:06:51.013 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.580 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.838 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:51.839 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:52.097 true 00:06:52.097 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:06:52.097 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.355 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
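This stretch of the trace is ns_hotplug_stress.sh configuring the target over scripts/rpc.py and then stressing it: a TCP transport, one subsystem (cnode1) with a data listener and the discovery listener on 10.0.0.2:4420, a delay bdev layered on a 32 MB malloc bdev, and a 1000 MB null bdev NULL1. spdk_nvme_perf then runs 30 seconds of 512-byte random reads at queue depth 128 against the subsystem, and while it runs the script keeps removing namespace 1, re-adding Delay0 and resizing NULL1. The "Read completed with error (sct=0, sc=11)" and "Message suppressed 999 times" lines are the initiator hitting reads against a just-removed namespace; the -Q 1000 option lets the run continue past those errors, with repeats reported in suppressed batches. A condensed sketch of the same sequence (rpc_py and binary paths shortened; the loop body is paraphrased from script lines 44-50 shown in the trace):

  rpc_py=./scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 512 -b Malloc0        # 32 MB backing device, 512-byte blocks
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc_py bdev_null_create NULL1 1000 512             # 1000 MB null bdev, resized in the loop
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  ./build/bin/spdk_nvme_perf -c 0x1 -t 30 -q 128 -w randread -o 512 -Q 1000 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID"; do                       # keep going while perf still runs
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 $null_size
  done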
00:06:52.921 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:52.921 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:52.921 true 00:06:52.921 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:06:52.921 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.178 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.434 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:53.434 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:53.999 true 00:06:53.999 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:06:53.999 18:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.932 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.190 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:55.190 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:55.447 true 00:06:55.447 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:06:55.447 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.704 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.962 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:55.962 18:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:56.220 true 00:06:56.220 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:06:56.220 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.477 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.735 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:56.735 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:56.992 true 00:06:56.992 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:06:56.992 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.925 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.183 18:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:58.183 18:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:58.747 true 00:06:58.747 18:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:06:58.747 18:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.747 18:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.005 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:59.005 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:59.263 true 00:06:59.564 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:06:59.564 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.564 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.873 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:59.873 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:00.132 true 00:07:00.132 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:00.132 18:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.507 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.507 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:01.507 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:01.764 true 00:07:01.764 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:01.764 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.022 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.280 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:02.280 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:02.538 true 00:07:02.538 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:02.538 18:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.471 18:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.729 18:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:03.729 18:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:03.987 true 00:07:03.987 18:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:03.987 18:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.246 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.505 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:04.505 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:04.763 true 00:07:04.764 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:04.764 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.022 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.280 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:05.280 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:05.539 true 00:07:05.539 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:05.539 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.472 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.472 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.729 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:06.729 18:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:06.988 true 00:07:06.988 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:06.988 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.246 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.503 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:07.504 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:07.762 true 00:07:07.762 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:07.762 18:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.020 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.586 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:08.586 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:08.586 true 00:07:08.586 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:08.586 18:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.519 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.519 18:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.035 18:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:10.035 18:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:10.293 true 00:07:10.293 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:10.293 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.551 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.809 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:10.809 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:11.066 true 00:07:11.066 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:11.066 18:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.999 18:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.999 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:11.999 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:12.257 true 00:07:12.257 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:12.257 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.824 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.824 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:12.824 18:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:13.082 true 00:07:13.082 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:13.082 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.340 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.904 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:13.904 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:13.904 true 00:07:13.904 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:13.904 18:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.280 18:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.280 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:15.280 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:15.538 true 00:07:15.538 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:15.538 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.796 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.054 18:09:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:16.054 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:16.312 true 00:07:16.312 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:16.313 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.571 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.829 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:16.829 18:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:17.087 true 00:07:17.087 18:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:17.087 18:09:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.020 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.020 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.279 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.279 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:18.279 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:18.538 true 00:07:18.538 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:18.538 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.104 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.104 18:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:19.104 18:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:19.362 true 00:07:19.362 18:09:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:19.362 18:09:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.296 Initializing NVMe Controllers 00:07:20.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:20.296 Controller IO queue size 128, less than required. 00:07:20.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:20.296 Controller IO queue size 128, less than required. 00:07:20.296 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:20.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:20.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:20.296 Initialization complete. Launching workers. 00:07:20.296 ======================================================== 00:07:20.296 Latency(us) 00:07:20.296 Device Information : IOPS MiB/s Average min max 00:07:20.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 629.13 0.31 91870.25 3791.01 1015519.27 00:07:20.296 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6604.93 3.23 19378.75 4478.14 391677.83 00:07:20.296 ======================================================== 00:07:20.296 Total : 7234.07 3.53 25683.20 3791.01 1015519.27 00:07:20.296 00:07:20.296 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.554 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:20.554 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:20.813 true 00:07:20.813 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2349096 00:07:20.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2349096) - No such process 00:07:20.813 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2349096 00:07:20.813 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.071 18:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.328 18:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:21.328 18:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:21.328 18:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:21.328 18:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.328 18:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:21.894 null0 00:07:21.894 18:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:21.894 18:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:21.894 18:09:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:22.151 null1 00:07:22.151 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:22.151 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:22.151 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:22.409 null2 00:07:22.409 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:22.409 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:22.409 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:22.666 null3 00:07:22.666 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:22.667 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:22.667 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:22.924 null4 00:07:22.924 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:22.924 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:22.924 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:23.181 null5 00:07:23.181 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:23.181 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.181 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:23.464 null6 00:07:23.464 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:23.464 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.464 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:23.722 null7 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
++i )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
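With the perf process gone (the "(2349096) - No such process" message above), the test moves to its parallel phase: eight null bdevs (null0 through null7, 100 MB each with a 4096-byte block size) are created, and eight background workers each hot-add and hot-remove their own namespace ID against cnode1 ten times, with the worker PIDs collected and joined by the wait a few lines further down. A sketch of that phase as reconstructed from the traced script lines (add_remove at lines 14-18, the driver loops at lines 58-66):

  add_remove() {                        # one worker per namespace ID
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      $rpc_py bdev_null_create "null$i" 100 4096
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &  # namespace IDs 1-8 map onto null0-null7
      pids+=($!)
  done
  wait "${pids[@]}"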
00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:23.722 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
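Interleaved with those spawn records, the @14-@18 markers trace the body of each add_remove worker: it captures its namespace ID and backing bdev, then attaches and detaches that namespace ten times through rpc.py. A sketch of the worker as it can be read back out of the trace; the $rpc_py shorthand is introduced here only for readability (the log shows the fully expanded /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path on every call):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                                                         # @14
        for ((i = 0; i < 10; i++)); do                                                # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }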
00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2353857 2353859 2353862 2353865 2353868 2353872 2353875 2353879 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.723 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.981 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.981 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.981 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.981 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.981 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.981 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.981 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.981 18:09:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.239 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.240 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.240 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.240 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.240 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.240 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.498 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.498 18:09:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.498 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.498 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.498 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.498 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.498 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.498 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.067 18:09:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.067 18:09:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.325 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.325 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.325 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.325 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.325 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.325 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.325 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.325 18:09:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
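Stripped of the xtrace prefixes, every iteration of every worker reduces to the same two RPC calls against the running target; for the worker driving namespace 5 backed by null4, for example:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5

Because eight workers issue these pairs concurrently, the connected host sees a continuous stream of namespace attach/detach notifications, which is the hotplug behaviour this test is stressing.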
00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.583 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.843 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.843 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.843 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.843 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.843 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.843 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.843 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.843 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.102 18:09:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.102 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.103 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.103 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.103 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.103 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.103 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.103 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.103 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.103 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.103 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.103 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.361 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.361 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.361 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.361 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.361 18:09:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.361 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.361 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.361 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.619 18:09:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.619 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.185 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.185 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.185 18:09:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.185 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.185 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.185 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.185 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.185 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.185 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.185 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.185 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.185 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.185 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.185 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.443 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.444 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.702 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.702 18:09:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.702 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.702 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.702 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.702 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.702 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.702 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.960 18:09:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.960 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.961 18:09:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.219 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.219 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.219 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.219 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.219 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.219 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.219 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.219 18:09:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
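Nothing in this trace inspects the subsystem between cycles, but when reproducing the scenario by hand it can be helpful to check which namespaces are attached at any given moment. SPDK's rpc.py offers nvmf_get_subsystems for that; it is not part of the traced script and is shown here only as a usage aid:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems   # lists each subsystem and its currently attached namespaces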
00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.479 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.738 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.738 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.738 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.738 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.738 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.738 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.738 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.738 18:09:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.305 18:09:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.305 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.564 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.564 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.564 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.564 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.564 18:09:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.564 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.564 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.564 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:29.822 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:29.823 18:09:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:29.823 rmmod nvme_tcp 00:07:29.823 rmmod nvme_fabrics 00:07:29.823 rmmod nvme_keyring 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2348527 ']' 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2348527 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2348527 ']' 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2348527 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2348527 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2348527' 00:07:29.823 killing process with pid 2348527 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2348527 00:07:29.823 18:09:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2348527 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:07:31.199 18:09:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.100 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:33.100 00:07:33.100 real 0m50.104s 00:07:33.100 user 3m48.886s 00:07:33.100 sys 0m16.504s 00:07:33.100 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.100 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:33.100 ************************************ 00:07:33.100 END TEST nvmf_ns_hotplug_stress 00:07:33.100 ************************************ 00:07:33.100 18:09:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:33.100 18:09:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.100 18:09:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.100 18:09:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.100 ************************************ 00:07:33.100 START TEST nvmf_delete_subsystem 00:07:33.100 ************************************ 00:07:33.100 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:33.100 * Looking for test storage... 00:07:33.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.100 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.100 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.100 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.360 18:09:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.360 --rc genhtml_branch_coverage=1 00:07:33.360 --rc genhtml_function_coverage=1 00:07:33.360 --rc genhtml_legend=1 00:07:33.360 --rc geninfo_all_blocks=1 00:07:33.360 --rc geninfo_unexecuted_blocks=1 00:07:33.360 00:07:33.360 ' 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.360 --rc genhtml_branch_coverage=1 00:07:33.360 --rc genhtml_function_coverage=1 00:07:33.360 --rc genhtml_legend=1 00:07:33.360 --rc geninfo_all_blocks=1 00:07:33.360 --rc geninfo_unexecuted_blocks=1 00:07:33.360 00:07:33.360 ' 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.360 --rc genhtml_branch_coverage=1 00:07:33.360 --rc genhtml_function_coverage=1 00:07:33.360 --rc genhtml_legend=1 00:07:33.360 --rc geninfo_all_blocks=1 00:07:33.360 --rc geninfo_unexecuted_blocks=1 00:07:33.360 00:07:33.360 ' 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.360 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.360 --rc genhtml_branch_coverage=1 00:07:33.360 --rc genhtml_function_coverage=1 00:07:33.360 --rc genhtml_legend=1 00:07:33.360 --rc geninfo_all_blocks=1 00:07:33.360 --rc geninfo_unexecuted_blocks=1 00:07:33.360 00:07:33.360 ' 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:33.360 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:33.361 18:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.267 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:35.268 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.268 
18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:35.268 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:35.268 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:35.268 Found net devices under 0000:0a:00.1: cvl_0_1 
00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:35.268 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:35.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:07:35.528 00:07:35.528 --- 10.0.0.2 ping statistics --- 00:07:35.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.528 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:07:35.528 00:07:35.528 --- 10.0.0.1 ping statistics --- 00:07:35.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.528 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.528 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2356807 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2356807 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2356807 ']' 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.529 18:09:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.529 18:09:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.529 [2024-11-28 18:09:47.595664] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:35.529 [2024-11-28 18:09:47.595828] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.787 [2024-11-28 18:09:47.751207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:36.046 [2024-11-28 18:09:47.888055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.046 [2024-11-28 18:09:47.888159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.046 [2024-11-28 18:09:47.888187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.046 [2024-11-28 18:09:47.888211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.046 [2024-11-28 18:09:47.888231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.046 [2024-11-28 18:09:47.890899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.046 [2024-11-28 18:09:47.890901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.613 [2024-11-28 18:09:48.573643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:36.613 18:09:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.613 [2024-11-28 18:09:48.591538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.613 NULL1 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.613 Delay0 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2356963 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:36.613 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:36.872 [2024-11-28 18:09:48.725814] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:38.772 18:09:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.772 18:09:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.772 18:09:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 [2024-11-28 18:09:50.826734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Write completed with 
error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 Write completed with error (sct=0, sc=8) 00:07:38.772 Read completed with error (sct=0, sc=8) 00:07:38.772 starting I/O failed: -6 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 starting I/O failed: -6 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 starting I/O failed: -6 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 starting I/O failed: -6 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 [2024-11-28 18:09:50.827939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016600 is same with the state(6) to be set 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 
00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with 
error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Read completed with error (sct=0, sc=8) 00:07:38.773 Write completed with error (sct=0, sc=8) 00:07:38.773 [2024-11-28 18:09:50.828950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:07:39.706 [2024-11-28 18:09:51.785558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015c00 is same with the state(6) to be set 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 [2024-11-28 18:09:51.829563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016880 is same with the state(6) to be set 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed 
with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Write completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 [2024-11-28 18:09:51.830318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(6) to be set 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.965 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 [2024-11-28 18:09:51.831414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Write completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 Read completed with error (sct=0, sc=8) 00:07:39.966 [2024-11-28 18:09:51.831997] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:07:39.966 Initializing NVMe Controllers 00:07:39.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:39.966 Controller IO queue size 128, less than required. 00:07:39.966 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:39.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:39.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:39.966 Initialization complete. Launching workers. 00:07:39.966 ======================================================== 00:07:39.966 Latency(us) 00:07:39.966 Device Information : IOPS MiB/s Average min max 00:07:39.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 175.75 0.09 886125.44 1025.23 1016360.18 00:07:39.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.95 0.08 928954.83 2018.82 1016248.21 00:07:39.966 ======================================================== 00:07:39.966 Total : 331.70 0.16 906261.65 1025.23 1016360.18 00:07:39.966 00:07:39.966 [2024-11-28 18:09:51.836721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015c00 (9): Bad file descriptor 00:07:39.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:39.966 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.966 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:39.966 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2356963 00:07:39.966 18:09:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2356963 00:07:40.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2356963) - No such process 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2356963 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2356963 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2356963 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:40.532 18:09:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.532 [2024-11-28 18:09:52.354608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.532 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.533 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.533 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.533 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.533 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.533 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2357368 00:07:40.533 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:40.533 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:40.533 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2357368 00:07:40.533 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.533 [2024-11-28 18:09:52.479508] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
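The rpc_cmd calls traced above wrap SPDK's scripts/rpc.py, so the subsystem re-creation and the perf workload that delete_subsystem.sh drives here can be sketched by hand roughly as follows (paths relative to the spdk checkout; the Delay0 bdev is assumed to already exist, as it does at this point in the test):

  # Re-create the subsystem, listener and namespace exactly as the trace shows.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Kick off the same 3-second 70/30 randrw workload against the target.
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!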
00:07:40.791 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.791 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2357368 00:07:40.791 18:09:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.357 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.357 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2357368 00:07:41.357 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.922 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.922 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2357368 00:07:41.922 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.488 18:09:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.488 18:09:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2357368 00:07:42.488 18:09:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.069 18:09:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.069 18:09:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2357368 00:07:43.069 18:09:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.387 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.387 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2357368 00:07:43.387 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.686 Initializing NVMe Controllers 00:07:43.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:43.686 Controller IO queue size 128, less than required. 00:07:43.686 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:43.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:43.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:43.686 Initialization complete. Launching workers. 
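The alternating kill -0 / sleep 0.5 entries above are the test polling for the perf process to exit once the subsystem has been deleted out from under it. A minimal sketch of that idiom, with illustrative variable names:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      # Mirror the (( delay++ > 20 )) guard from the trace: give up after ~10s.
      if (( delay++ > 20 )); then
          echo "perf pid $perf_pid did not exit in time" >&2
          break
      fi
      sleep 0.5
  done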
00:07:43.686 ======================================================== 00:07:43.686 Latency(us) 00:07:43.686 Device Information : IOPS MiB/s Average min max 00:07:43.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005087.57 1000257.04 1014675.29 00:07:43.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005848.15 1000233.80 1016066.05 00:07:43.686 ======================================================== 00:07:43.686 Total : 256.00 0.12 1005467.86 1000233.80 1016066.05 00:07:43.686 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2357368 00:07:43.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2357368) - No such process 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2357368 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.946 rmmod nvme_tcp 00:07:43.946 rmmod nvme_fabrics 00:07:43.946 rmmod nvme_keyring 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2356807 ']' 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2356807 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2356807 ']' 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2356807 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.946 18:09:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2356807 00:07:43.946 18:09:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.946 18:09:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:43.946 18:09:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2356807' 00:07:43.946 killing process with pid 2356807 00:07:43.946 18:09:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2356807 00:07:43.946 18:09:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2356807 00:07:45.324 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:45.324 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:45.324 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:45.324 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:45.324 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:45.324 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:45.324 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:45.324 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:45.324 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:45.325 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.325 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.325 18:09:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.230 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:47.230 00:07:47.230 real 0m14.101s 00:07:47.230 user 0m30.696s 00:07:47.230 sys 0m3.293s 00:07:47.230 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.230 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.230 ************************************ 00:07:47.230 END TEST nvmf_delete_subsystem 00:07:47.230 ************************************ 00:07:47.230 18:09:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:47.230 18:09:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:47.230 18:09:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.230 18:09:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.230 ************************************ 00:07:47.230 START TEST nvmf_host_management 00:07:47.230 ************************************ 00:07:47.230 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:47.230 * Looking for test storage... 
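The nvmftestfini/killprocess teardown traced just above amounts to unloading the host-side NVMe modules and killing the target only after checking that the pid still names an SPDK reactor; a rough, simplified equivalent (the pid is the one reported in the log and purely illustrative):

  sync
  modprobe -v -r nvme-tcp        # module removal, as in the rmmod lines above
  modprobe -v -r nvme-fabrics
  nvmfpid=2356807
  if [ "$(ps --no-headers -o comm= "$nvmfpid")" = reactor_0 ]; then
      echo "killing process with pid $nvmfpid"
      kill "$nvmfpid"
  fi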
00:07:47.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.230 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:47.230 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:47.231 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:47.490 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:47.490 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.490 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.490 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.490 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.491 --rc genhtml_branch_coverage=1 00:07:47.491 --rc genhtml_function_coverage=1 00:07:47.491 --rc genhtml_legend=1 00:07:47.491 --rc geninfo_all_blocks=1 00:07:47.491 --rc geninfo_unexecuted_blocks=1 00:07:47.491 00:07:47.491 ' 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.491 --rc genhtml_branch_coverage=1 00:07:47.491 --rc genhtml_function_coverage=1 00:07:47.491 --rc genhtml_legend=1 00:07:47.491 --rc geninfo_all_blocks=1 00:07:47.491 --rc geninfo_unexecuted_blocks=1 00:07:47.491 00:07:47.491 ' 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.491 --rc genhtml_branch_coverage=1 00:07:47.491 --rc genhtml_function_coverage=1 00:07:47.491 --rc genhtml_legend=1 00:07:47.491 --rc geninfo_all_blocks=1 00:07:47.491 --rc geninfo_unexecuted_blocks=1 00:07:47.491 00:07:47.491 ' 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.491 --rc genhtml_branch_coverage=1 00:07:47.491 --rc genhtml_function_coverage=1 00:07:47.491 --rc genhtml_legend=1 00:07:47.491 --rc geninfo_all_blocks=1 00:07:47.491 --rc geninfo_unexecuted_blocks=1 00:07:47.491 00:07:47.491 ' 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:47.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:47.491 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:47.492 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:47.492 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.492 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.492 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.492 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:47.492 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:47.492 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:47.492 18:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:49.390 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:49.390 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:49.390 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.390 18:10:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:49.390 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:49.390 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:49.391 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.391 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.391 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.391 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:49.391 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.391 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.391 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:49.391 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:49.391 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.391 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.391 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:49.648 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:49.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:07:49.649 00:07:49.649 --- 10.0.0.2 ping statistics --- 00:07:49.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.649 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:07:49.649 00:07:49.649 --- 10.0.0.1 ping statistics --- 00:07:49.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.649 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2359986 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2359986 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:49.649 18:10:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2359986 ']' 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.649 18:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:49.649 [2024-11-28 18:10:01.742451] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:49.649 [2024-11-28 18:10:01.742581] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.953 [2024-11-28 18:10:01.918319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.211 [2024-11-28 18:10:02.062622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.211 [2024-11-28 18:10:02.062685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.211 [2024-11-28 18:10:02.062705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.211 [2024-11-28 18:10:02.062723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.211 [2024-11-28 18:10:02.062739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
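Condensed from the nvmftestinit trace above: the target-side e810 port (cvl_0_0) is moved into its own network namespace, both ends are addressed, TCP/4420 is opened, reachability is verified with ping, and nvmf_tgt is started inside the namespace. A hand-run sketch of those steps, all taken from the trace (ordering simplified):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in from the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start the target in the namespace with the same core mask and trace flags.
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &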
00:07:50.211 [2024-11-28 18:10:02.065161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.211 [2024-11-28 18:10:02.065225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.211 [2024-11-28 18:10:02.065271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.211 [2024-11-28 18:10:02.065277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.776 [2024-11-28 18:10:02.813255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.776 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.034 Malloc0 00:07:51.034 [2024-11-28 18:10:02.943133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2360161 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2360161 /var/tmp/bdevperf.sock 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2360161 ']' 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.034 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.034 { 00:07:51.034 "params": { 00:07:51.034 "name": "Nvme$subsystem", 00:07:51.034 "trtype": "$TEST_TRANSPORT", 00:07:51.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.034 "adrfam": "ipv4", 00:07:51.034 "trsvcid": "$NVMF_PORT", 00:07:51.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.034 "hdgst": ${hdgst:-false}, 00:07:51.034 "ddgst": ${ddgst:-false} 00:07:51.034 }, 00:07:51.034 "method": "bdev_nvme_attach_controller" 00:07:51.034 } 00:07:51.035 EOF 00:07:51.035 )") 00:07:51.035 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:51.035 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:51.035 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:51.035 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:51.035 "params": { 00:07:51.035 "name": "Nvme0", 00:07:51.035 "trtype": "tcp", 00:07:51.035 "traddr": "10.0.0.2", 00:07:51.035 "adrfam": "ipv4", 00:07:51.035 "trsvcid": "4420", 00:07:51.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:51.035 "hdgst": false, 00:07:51.035 "ddgst": false 00:07:51.035 }, 00:07:51.035 "method": "bdev_nvme_attach_controller" 00:07:51.035 }' 00:07:51.035 [2024-11-28 18:10:03.063265] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
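The bdevperf launch above feeds the JSON fragment printed by gen_nvmf_target_json (the Nvme0 bdev_nvme_attach_controller parameters) to --json via /dev/fd/63. Assuming test/nvmf/common.sh is sourced so gen_nvmf_target_json is available, the same launch and the iostat polling that waitforio performs further down can be sketched as:

  # Run the 10-second verify workload against the attached Nvme0 controller.
  build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
  # Poll read I/O progress over bdevperf's private RPC socket, as the test does.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
      | jq -r '.bdevs[0].num_read_ops'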
00:07:51.035 [2024-11-28 18:10:03.063432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360161 ] 00:07:51.292 [2024-11-28 18:10:03.200908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.292 [2024-11-28 18:10:03.328822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.857 Running I/O for 10 seconds... 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=259 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 259 -ge 100 ']' 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:52.116 18:10:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.116 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.116 [2024-11-28 18:10:04.087361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.087462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.087505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.087530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.087569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.087592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.087619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.087642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.087668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.087693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.087719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.087743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.087768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.087826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.087856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.087890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.087917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 
18:10:04.087941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.087965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.087988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 
18:10:04.088485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.116 [2024-11-28 18:10:04.088864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.116 [2024-11-28 18:10:04.088887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.088909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.088932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 
18:10:04.088953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.088977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.088998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 
18:10:04.089485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 
18:10:04.089943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.089968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.089989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 
18:10:04.090480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.090685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.117 [2024-11-28 18:10:04.090707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.091182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.117 [2024-11-28 18:10:04.091213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.091239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.117 [2024-11-28 18:10:04.091271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.091293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.117 [2024-11-28 18:10:04.091313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.091335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:52.117 [2024-11-28 18:10:04.091356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:52.117 [2024-11-28 18:10:04.091376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:07:52.117 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.117 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:52.117 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.117 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.117 [2024-11-28 18:10:04.092581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:52.117 task offset: 41344 on job bdev=Nvme0n1 fails 00:07:52.117 00:07:52.117 Latency(us) 00:07:52.117 [2024-11-28T17:10:04.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.117 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:52.117 Job: Nvme0n1 ended in about 0.27 seconds with error 00:07:52.117 Verification LBA range: start 0x0 length 0x400 00:07:52.117 Nvme0n1 : 0.27 1165.84 72.87 233.17 0.00 43807.89 4708.88 45049.93 00:07:52.117 [2024-11-28T17:10:04.217Z] =================================================================================================================== 00:07:52.117 [2024-11-28T17:10:04.217Z] Total : 1165.84 72.87 233.17 0.00 43807.89 4708.88 45049.93 00:07:52.117 [2024-11-28 18:10:04.097611] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.117 [2024-11-28 18:10:04.097662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:07:52.117 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.117 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:52.117 [2024-11-28 18:10:04.111018] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
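The trace above is the failure half of the host-management check: while bdevperf pushes verify I/O at Nvme0n1, the test removes the host entry from the subsystem, the target aborts every queued write (the SQ DELETION notices), bdevperf records the failed job, and the host driver resets the controller once the host is re-added. A minimal hand-driven sketch of that remove/re-add cycle, using the same RPCs and NQNs that appear in the log (illustrative only; assumes a running target and a connected initiator):

  # drop the host from the subsystem; its outstanding I/O is aborted with SQ DELETION
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # re-add the host so the initiator can reconnect and reset the controller
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0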
00:07:53.052 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2360161 00:07:53.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2360161) - No such process 00:07:53.052 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:53.052 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:53.052 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:53.053 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:53.053 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:53.053 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:53.053 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:53.053 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:53.053 { 00:07:53.053 "params": { 00:07:53.053 "name": "Nvme$subsystem", 00:07:53.053 "trtype": "$TEST_TRANSPORT", 00:07:53.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.053 "adrfam": "ipv4", 00:07:53.053 "trsvcid": "$NVMF_PORT", 00:07:53.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.053 "hdgst": ${hdgst:-false}, 00:07:53.053 "ddgst": ${ddgst:-false} 00:07:53.053 }, 00:07:53.053 "method": "bdev_nvme_attach_controller" 00:07:53.053 } 00:07:53.053 EOF 00:07:53.053 )") 00:07:53.053 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:53.053 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:53.053 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:53.053 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:53.053 "params": { 00:07:53.053 "name": "Nvme0", 00:07:53.053 "trtype": "tcp", 00:07:53.053 "traddr": "10.0.0.2", 00:07:53.053 "adrfam": "ipv4", 00:07:53.053 "trsvcid": "4420", 00:07:53.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.053 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:53.053 "hdgst": false, 00:07:53.053 "ddgst": false 00:07:53.053 }, 00:07:53.053 "method": "bdev_nvme_attach_controller" 00:07:53.053 }' 00:07:53.311 [2024-11-28 18:10:05.186379] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:07:53.311 [2024-11-28 18:10:05.186538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2360441 ] 00:07:53.311 [2024-11-28 18:10:05.322974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.570 [2024-11-28 18:10:05.453900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.137 Running I/O for 1 seconds... 
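For the retry run, bdevperf is not configured over its RPC socket; gen_nvmf_target_json emits the attach entry as JSON on /dev/fd/62 and bdevperf consumes it at startup, which is what the printf block above shows. A rough command-line equivalent of that single bdev_nvme_attach_controller entry, written against scripts/rpc.py and a target already listening on 10.0.0.2:4420 (a sketch for illustration, not part of the test flow):

  # hypothetical manual attach mirroring the generated JSON above
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0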
00:07:55.072 1344.00 IOPS, 84.00 MiB/s 00:07:55.072 Latency(us) 00:07:55.072 [2024-11-28T17:10:07.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.072 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:55.072 Verification LBA range: start 0x0 length 0x400 00:07:55.072 Nvme0n1 : 1.04 1352.68 84.54 0.00 0.00 46514.61 8786.68 40972.14 00:07:55.072 [2024-11-28T17:10:07.172Z] =================================================================================================================== 00:07:55.072 [2024-11-28T17:10:07.172Z] Total : 1352.68 84.54 0.00 0.00 46514.61 8786.68 40972.14 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.008 rmmod nvme_tcp 00:07:56.008 rmmod nvme_fabrics 00:07:56.008 rmmod nvme_keyring 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2359986 ']' 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2359986 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2359986 ']' 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2359986 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2359986 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:56.008 18:10:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2359986' 00:07:56.008 killing process with pid 2359986 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2359986 00:07:56.008 18:10:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2359986 00:07:57.385 [2024-11-28 18:10:09.133577] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.385 18:10:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.286 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:59.286 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:59.286 00:07:59.286 real 0m12.006s 00:07:59.286 user 0m32.550s 00:07:59.286 sys 0m3.173s 00:07:59.286 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.286 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.286 ************************************ 00:07:59.286 END TEST nvmf_host_management 00:07:59.286 ************************************ 00:07:59.286 18:10:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:59.286 18:10:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:59.286 18:10:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.286 18:10:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:59.286 ************************************ 00:07:59.286 START TEST nvmf_lvol 00:07:59.286 ************************************ 00:07:59.286 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:59.286 * Looking for test storage... 00:07:59.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.544 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.545 --rc genhtml_branch_coverage=1 00:07:59.545 --rc genhtml_function_coverage=1 00:07:59.545 --rc genhtml_legend=1 00:07:59.545 --rc geninfo_all_blocks=1 00:07:59.545 --rc geninfo_unexecuted_blocks=1 00:07:59.545 00:07:59.545 ' 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.545 --rc genhtml_branch_coverage=1 00:07:59.545 --rc genhtml_function_coverage=1 00:07:59.545 --rc genhtml_legend=1 00:07:59.545 --rc geninfo_all_blocks=1 00:07:59.545 --rc geninfo_unexecuted_blocks=1 00:07:59.545 00:07:59.545 ' 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.545 --rc genhtml_branch_coverage=1 00:07:59.545 --rc genhtml_function_coverage=1 00:07:59.545 --rc genhtml_legend=1 00:07:59.545 --rc geninfo_all_blocks=1 00:07:59.545 --rc geninfo_unexecuted_blocks=1 00:07:59.545 00:07:59.545 ' 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:59.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.545 --rc genhtml_branch_coverage=1 00:07:59.545 --rc genhtml_function_coverage=1 00:07:59.545 --rc genhtml_legend=1 00:07:59.545 --rc geninfo_all_blocks=1 00:07:59.545 --rc geninfo_unexecuted_blocks=1 00:07:59.545 00:07:59.545 ' 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:59.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:59.545 18:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:02.075 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:02.076 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:02.076 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.076 18:10:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:02.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:02.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:02.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:08:02.076 00:08:02.076 --- 10.0.0.2 ping statistics --- 00:08:02.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.076 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:08:02.076 00:08:02.076 --- 10.0.0.1 ping statistics --- 00:08:02.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.076 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2362800 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2362800 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2362800 ']' 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.076 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.076 [2024-11-28 18:10:13.852079] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
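Condensed from the nvmftestinit trace above, the E810 test topology is built by moving one detected port (cvl_0_0) into a private network namespace while its sibling (cvl_0_1) stays in the host namespace, so target and initiator traffic crosses the physical link. A rough recap of the setup, with the interface, namespace and IP values exactly as they appear in this run (the full workspace path to nvmf_tgt is shortened):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator-side port stays in the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                          # host -> namespace reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # namespace -> host reachability check
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &   # pid 2362800 in this run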
00:08:02.076 [2024-11-28 18:10:13.852248] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.076 [2024-11-28 18:10:14.001798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.076 [2024-11-28 18:10:14.137131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.076 [2024-11-28 18:10:14.137210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.076 [2024-11-28 18:10:14.137236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.076 [2024-11-28 18:10:14.137259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.076 [2024-11-28 18:10:14.137279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.076 [2024-11-28 18:10:14.139882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.076 [2024-11-28 18:10:14.139958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.076 [2024-11-28 18:10:14.139962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.011 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.011 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:03.011 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.011 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.011 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.011 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.011 18:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:03.011 [2024-11-28 18:10:15.091466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.271 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:03.529 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:03.529 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:03.786 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:03.786 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:04.044 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:04.302 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=aceb372b-7c0c-4585-855b-fcc9e0035680 00:08:04.302 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aceb372b-7c0c-4585-855b-fcc9e0035680 lvol 20 00:08:04.868 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b418ab0a-f4e0-42d9-8065-1a8a019f6b00 00:08:04.868 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:04.868 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b418ab0a-f4e0-42d9-8065-1a8a019f6b00 00:08:05.125 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:05.383 [2024-11-28 18:10:17.460129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.383 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.949 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2363358 00:08:05.949 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:05.949 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:06.885 18:10:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b418ab0a-f4e0-42d9-8065-1a8a019f6b00 MY_SNAPSHOT 00:08:07.143 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f53f2307-8c59-498c-a3f8-aaeac01fd959 00:08:07.143 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b418ab0a-f4e0-42d9-8065-1a8a019f6b00 30 00:08:07.708 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f53f2307-8c59-498c-a3f8-aaeac01fd959 MY_CLONE 00:08:07.966 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b9d71523-ecbf-4d6e-ba4a-dab1fe58fdc9 00:08:07.966 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b9d71523-ecbf-4d6e-ba4a-dab1fe58fdc9 00:08:08.900 18:10:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2363358 00:08:17.099 Initializing NVMe Controllers 00:08:17.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:17.099 Controller IO queue size 128, less than required. 00:08:17.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
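Stripped of the xtrace noise, the nvmf_lvol body above is a short RPC sequence: two 64 MiB malloc bdevs are striped into a raid0, an lvstore and a 20 MiB lvol are created on it, the lvol is exported over NVMe/TCP, and the snapshot/resize/clone/inflate operations are issued while spdk_nvme_perf keeps 128 random writes in flight against the namespace. A condensed recap, with rpc.py standing for scripts/rpc.py and UUIDs abbreviated to the values seen in this run:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                      # -> Malloc0
  rpc.py bdev_malloc_create 64 512                      # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)      # aceb372b-...
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)     # b418ab0a-...
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 & perf_pid=$!   # 2363358 here
  sleep 1
  snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # f53f2307-...
  rpc.py bdev_lvol_resize "$lvol" 30                     # grow the live lvol from 20 to 30 MiB
  clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)       # b9d71523-...
  rpc.py bdev_lvol_inflate "$clone"
  wait "$perf_pid"                                       # let the 10 s perf run finish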
00:08:17.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:17.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:17.099 Initialization complete. Launching workers. 00:08:17.099 ======================================================== 00:08:17.099 Latency(us) 00:08:17.099 Device Information : IOPS MiB/s Average min max 00:08:17.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8058.20 31.48 15901.83 349.99 138885.75 00:08:17.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7910.42 30.90 16187.65 3412.95 153172.37 00:08:17.099 ======================================================== 00:08:17.099 Total : 15968.62 62.38 16043.42 349.99 153172.37 00:08:17.099 00:08:17.099 18:10:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:17.099 18:10:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b418ab0a-f4e0-42d9-8065-1a8a019f6b00 00:08:17.099 18:10:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aceb372b-7c0c-4585-855b-fcc9e0035680 00:08:17.099 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:17.099 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:17.099 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:17.099 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.099 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:17.099 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.099 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:17.099 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.099 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.099 rmmod nvme_tcp 00:08:17.099 rmmod nvme_fabrics 00:08:17.099 rmmod nvme_keyring 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2362800 ']' 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2362800 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2362800 ']' 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2362800 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2362800 00:08:17.359 18:10:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2362800' 00:08:17.359 killing process with pid 2362800 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2362800 00:08:17.359 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2362800 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.736 18:10:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.641 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.641 00:08:20.641 real 0m21.355s 00:08:20.641 user 1m11.233s 00:08:20.641 sys 0m5.583s 00:08:20.641 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.641 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:20.641 ************************************ 00:08:20.641 END TEST nvmf_lvol 00:08:20.641 ************************************ 00:08:20.641 18:10:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:20.641 18:10:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.641 18:10:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.641 18:10:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.641 ************************************ 00:08:20.641 START TEST nvmf_lvs_grow 00:08:20.641 ************************************ 00:08:20.641 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:20.900 * Looking for test storage... 
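The nvmf_lvol teardown just above (nvmftestfini) undoes the setup in reverse before the next suite starts; condensed from the trace, with the same shorthand as before:

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  sync
  modprobe -v -r nvme-tcp                                # also drops nvme_fabrics and nvme_keyring
  kill "$nvmfpid" && wait "$nvmfpid"                     # 2362800, the nvmf_tgt reactor process
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the rules tagged SPDK_NVMF
  # remove_spdk_ns tears down the cvl_0_0_ns_spdk namespace; its commands are redirected
  # to /dev/null in this trace, so the exact 'ip netns' calls are not visible here
  ip -4 addr flush cvl_0_1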
00:08:20.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:20.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.900 --rc genhtml_branch_coverage=1 00:08:20.900 --rc genhtml_function_coverage=1 00:08:20.900 --rc genhtml_legend=1 00:08:20.900 --rc geninfo_all_blocks=1 00:08:20.900 --rc geninfo_unexecuted_blocks=1 00:08:20.900 00:08:20.900 ' 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:20.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.900 --rc genhtml_branch_coverage=1 00:08:20.900 --rc genhtml_function_coverage=1 00:08:20.900 --rc genhtml_legend=1 00:08:20.900 --rc geninfo_all_blocks=1 00:08:20.900 --rc geninfo_unexecuted_blocks=1 00:08:20.900 00:08:20.900 ' 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:20.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.900 --rc genhtml_branch_coverage=1 00:08:20.900 --rc genhtml_function_coverage=1 00:08:20.900 --rc genhtml_legend=1 00:08:20.900 --rc geninfo_all_blocks=1 00:08:20.900 --rc geninfo_unexecuted_blocks=1 00:08:20.900 00:08:20.900 ' 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:20.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.900 --rc genhtml_branch_coverage=1 00:08:20.900 --rc genhtml_function_coverage=1 00:08:20.900 --rc genhtml_legend=1 00:08:20.900 --rc geninfo_all_blocks=1 00:08:20.900 --rc geninfo_unexecuted_blocks=1 00:08:20.900 00:08:20.900 ' 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:20.900 18:10:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.900 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.901 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.432 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:23.433 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:23.433 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.433 18:10:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:23.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:23.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:08:23.433 00:08:23.433 --- 10.0.0.2 ping statistics --- 00:08:23.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.433 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:08:23.433 00:08:23.433 --- 10.0.0.1 ping statistics --- 00:08:23.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.433 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2366774 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2366774 00:08:23.433 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2366774 ']' 00:08:23.434 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.434 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.434 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.434 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.434 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.434 [2024-11-28 18:10:35.339484] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
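waitforlisten is not expanded in this trace; it blocks until the freshly started nvmf_tgt (pid 2366774 here) answers on its RPC socket, /var/tmp/spdk.sock, giving up after max_retries=100. A minimal stand-in with the same contract might look like the sketch below; this is a hypothetical helper, not the autotest_common.sh implementation, and it assumes that polling rpc_get_methods is an acceptable liveness probe:

  wait_for_rpc() {                        # wait_for_rpc <pid> [rpc-socket]
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                         # target process died
          scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1                                                           # timed out
  }

Used as: wait_for_rpc "$nvmfpid" /var/tmp/spdk.sock before issuing any further rpc.py calls against the target.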
00:08:23.434 [2024-11-28 18:10:35.339617] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.434 [2024-11-28 18:10:35.487713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.692 [2024-11-28 18:10:35.618577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.692 [2024-11-28 18:10:35.618671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.692 [2024-11-28 18:10:35.618697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.692 [2024-11-28 18:10:35.618721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.692 [2024-11-28 18:10:35.618740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.692 [2024-11-28 18:10:35.620391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.259 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.259 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:24.259 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.259 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.259 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.259 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.259 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:24.826 [2024-11-28 18:10:36.625642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.826 ************************************ 00:08:24.826 START TEST lvs_grow_clean 00:08:24.826 ************************************ 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:24.826 18:10:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.826 18:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.084 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:25.084 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:25.342 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b549a12e-6755-44a7-b40b-2e80bd60face 00:08:25.342 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b549a12e-6755-44a7-b40b-2e80bd60face 00:08:25.342 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:25.600 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:25.600 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:25.600 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b549a12e-6755-44a7-b40b-2e80bd60face lvol 150 00:08:25.858 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=788ddf5a-bc48-47df-b562-3a23d3c6c69c 00:08:25.858 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.858 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:26.115 [2024-11-28 18:10:38.111034] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:26.115 [2024-11-28 18:10:38.111190] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:26.115 true 00:08:26.115 18:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b549a12e-6755-44a7-b40b-2e80bd60face 00:08:26.115 18:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:26.374 18:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:26.374 18:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.632 18:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 788ddf5a-bc48-47df-b562-3a23d3c6c69c 00:08:27.198 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:27.456 [2024-11-28 18:10:39.310911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.456 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.714 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2367348 00:08:27.714 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:27.714 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.714 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2367348 /var/tmp/bdevperf.sock 00:08:27.714 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2367348 ']' 00:08:27.714 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:27.714 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.714 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:27.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:27.714 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.714 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:27.714 [2024-11-28 18:10:39.678203] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
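The lvol is then exported over NVMe/TCP and exercised with bdevperf from a second SPDK process, which is what the trace is starting up here. A sketch of that part, again assuming an SPDK checkout and reusing the address, nqn and bdevperf options from the trace, with $lvol holding the lvol UUID from above:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# bdevperf acts as the initiator: 4k random writes, queue depth 128, 10 seconds,
# controlled over its own RPC socket (-z makes it wait until perform_tests is called)
build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests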
00:08:27.714 [2024-11-28 18:10:39.678330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2367348 ] 00:08:27.973 [2024-11-28 18:10:39.823063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.973 [2024-11-28 18:10:39.960117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.908 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.908 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:28.908 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:28.908 Nvme0n1 00:08:29.166 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:29.424 [ 00:08:29.424 { 00:08:29.424 "name": "Nvme0n1", 00:08:29.424 "aliases": [ 00:08:29.424 "788ddf5a-bc48-47df-b562-3a23d3c6c69c" 00:08:29.424 ], 00:08:29.424 "product_name": "NVMe disk", 00:08:29.424 "block_size": 4096, 00:08:29.424 "num_blocks": 38912, 00:08:29.424 "uuid": "788ddf5a-bc48-47df-b562-3a23d3c6c69c", 00:08:29.424 "numa_id": 0, 00:08:29.424 "assigned_rate_limits": { 00:08:29.424 "rw_ios_per_sec": 0, 00:08:29.424 "rw_mbytes_per_sec": 0, 00:08:29.424 "r_mbytes_per_sec": 0, 00:08:29.425 "w_mbytes_per_sec": 0 00:08:29.425 }, 00:08:29.425 "claimed": false, 00:08:29.425 "zoned": false, 00:08:29.425 "supported_io_types": { 00:08:29.425 "read": true, 00:08:29.425 "write": true, 00:08:29.425 "unmap": true, 00:08:29.425 "flush": true, 00:08:29.425 "reset": true, 00:08:29.425 "nvme_admin": true, 00:08:29.425 "nvme_io": true, 00:08:29.425 "nvme_io_md": false, 00:08:29.425 "write_zeroes": true, 00:08:29.425 "zcopy": false, 00:08:29.425 "get_zone_info": false, 00:08:29.425 "zone_management": false, 00:08:29.425 "zone_append": false, 00:08:29.425 "compare": true, 00:08:29.425 "compare_and_write": true, 00:08:29.425 "abort": true, 00:08:29.425 "seek_hole": false, 00:08:29.425 "seek_data": false, 00:08:29.425 "copy": true, 00:08:29.425 "nvme_iov_md": false 00:08:29.425 }, 00:08:29.425 "memory_domains": [ 00:08:29.425 { 00:08:29.425 "dma_device_id": "system", 00:08:29.425 "dma_device_type": 1 00:08:29.425 } 00:08:29.425 ], 00:08:29.425 "driver_specific": { 00:08:29.425 "nvme": [ 00:08:29.425 { 00:08:29.425 "trid": { 00:08:29.425 "trtype": "TCP", 00:08:29.425 "adrfam": "IPv4", 00:08:29.425 "traddr": "10.0.0.2", 00:08:29.425 "trsvcid": "4420", 00:08:29.425 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:29.425 }, 00:08:29.425 "ctrlr_data": { 00:08:29.425 "cntlid": 1, 00:08:29.425 "vendor_id": "0x8086", 00:08:29.425 "model_number": "SPDK bdev Controller", 00:08:29.425 "serial_number": "SPDK0", 00:08:29.425 "firmware_revision": "25.01", 00:08:29.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:29.425 "oacs": { 00:08:29.425 "security": 0, 00:08:29.425 "format": 0, 00:08:29.425 "firmware": 0, 00:08:29.425 "ns_manage": 0 00:08:29.425 }, 00:08:29.425 "multi_ctrlr": true, 00:08:29.425 
"ana_reporting": false 00:08:29.425 }, 00:08:29.425 "vs": { 00:08:29.425 "nvme_version": "1.3" 00:08:29.425 }, 00:08:29.425 "ns_data": { 00:08:29.425 "id": 1, 00:08:29.425 "can_share": true 00:08:29.425 } 00:08:29.425 } 00:08:29.425 ], 00:08:29.425 "mp_policy": "active_passive" 00:08:29.425 } 00:08:29.425 } 00:08:29.425 ] 00:08:29.425 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2367607 00:08:29.425 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:29.425 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.425 Running I/O for 10 seconds... 00:08:30.360 Latency(us) 00:08:30.360 [2024-11-28T17:10:42.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.360 Nvme0n1 : 1.00 10607.00 41.43 0.00 0.00 0.00 0.00 0.00 00:08:30.360 [2024-11-28T17:10:42.460Z] =================================================================================================================== 00:08:30.360 [2024-11-28T17:10:42.460Z] Total : 10607.00 41.43 0.00 0.00 0.00 0.00 0.00 00:08:30.360 00:08:31.297 18:10:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b549a12e-6755-44a7-b40b-2e80bd60face 00:08:31.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.556 Nvme0n1 : 2.00 10733.50 41.93 0.00 0.00 0.00 0.00 0.00 00:08:31.556 [2024-11-28T17:10:43.656Z] =================================================================================================================== 00:08:31.556 [2024-11-28T17:10:43.656Z] Total : 10733.50 41.93 0.00 0.00 0.00 0.00 0.00 00:08:31.556 00:08:31.556 true 00:08:31.556 18:10:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b549a12e-6755-44a7-b40b-2e80bd60face 00:08:31.556 18:10:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:32.123 18:10:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:32.123 18:10:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:32.123 18:10:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2367607 00:08:32.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.382 Nvme0n1 : 3.00 10755.00 42.01 0.00 0.00 0.00 0.00 0.00 00:08:32.382 [2024-11-28T17:10:44.482Z] =================================================================================================================== 00:08:32.382 [2024-11-28T17:10:44.482Z] Total : 10755.00 42.01 0.00 0.00 0.00 0.00 0.00 00:08:32.382 00:08:33.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.318 Nvme0n1 : 4.00 10796.75 42.17 0.00 0.00 0.00 0.00 0.00 00:08:33.318 [2024-11-28T17:10:45.418Z] 
=================================================================================================================== 00:08:33.318 [2024-11-28T17:10:45.418Z] Total : 10796.75 42.17 0.00 0.00 0.00 0.00 0.00 00:08:33.318 00:08:34.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.693 Nvme0n1 : 5.00 10848.20 42.38 0.00 0.00 0.00 0.00 0.00 00:08:34.693 [2024-11-28T17:10:46.793Z] =================================================================================================================== 00:08:34.693 [2024-11-28T17:10:46.793Z] Total : 10848.20 42.38 0.00 0.00 0.00 0.00 0.00 00:08:34.693 00:08:35.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.629 Nvme0n1 : 6.00 10871.50 42.47 0.00 0.00 0.00 0.00 0.00 00:08:35.629 [2024-11-28T17:10:47.729Z] =================================================================================================================== 00:08:35.629 [2024-11-28T17:10:47.729Z] Total : 10871.50 42.47 0.00 0.00 0.00 0.00 0.00 00:08:35.629 00:08:36.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.566 Nvme0n1 : 7.00 10906.14 42.60 0.00 0.00 0.00 0.00 0.00 00:08:36.566 [2024-11-28T17:10:48.666Z] =================================================================================================================== 00:08:36.566 [2024-11-28T17:10:48.666Z] Total : 10906.14 42.60 0.00 0.00 0.00 0.00 0.00 00:08:36.566 00:08:37.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.504 Nvme0n1 : 8.00 10916.25 42.64 0.00 0.00 0.00 0.00 0.00 00:08:37.504 [2024-11-28T17:10:49.604Z] =================================================================================================================== 00:08:37.504 [2024-11-28T17:10:49.604Z] Total : 10916.25 42.64 0.00 0.00 0.00 0.00 0.00 00:08:37.504 00:08:38.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.440 Nvme0n1 : 9.00 10924.78 42.67 0.00 0.00 0.00 0.00 0.00 00:08:38.440 [2024-11-28T17:10:50.540Z] =================================================================================================================== 00:08:38.440 [2024-11-28T17:10:50.540Z] Total : 10924.78 42.67 0.00 0.00 0.00 0.00 0.00 00:08:38.440 00:08:39.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.378 Nvme0n1 : 10.00 10937.80 42.73 0.00 0.00 0.00 0.00 0.00 00:08:39.378 [2024-11-28T17:10:51.478Z] =================================================================================================================== 00:08:39.378 [2024-11-28T17:10:51.478Z] Total : 10937.80 42.73 0.00 0.00 0.00 0.00 0.00 00:08:39.378 00:08:39.378 00:08:39.378 Latency(us) 00:08:39.378 [2024-11-28T17:10:51.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.378 Nvme0n1 : 10.01 10940.25 42.74 0.00 0.00 11693.04 2924.85 22816.24 00:08:39.378 [2024-11-28T17:10:51.478Z] =================================================================================================================== 00:08:39.378 [2024-11-28T17:10:51.478Z] Total : 10940.25 42.74 0.00 0.00 11693.04 2924.85 22816.24 00:08:39.378 { 00:08:39.378 "results": [ 00:08:39.378 { 00:08:39.378 "job": "Nvme0n1", 00:08:39.378 "core_mask": "0x2", 00:08:39.378 "workload": "randwrite", 00:08:39.378 "status": "finished", 00:08:39.378 "queue_depth": 128, 00:08:39.378 "io_size": 4096, 00:08:39.378 
"runtime": 10.009459, 00:08:39.378 "iops": 10940.251615996429, 00:08:39.378 "mibps": 42.73535787498605, 00:08:39.378 "io_failed": 0, 00:08:39.378 "io_timeout": 0, 00:08:39.378 "avg_latency_us": 11693.043137754672, 00:08:39.378 "min_latency_us": 2924.8474074074074, 00:08:39.378 "max_latency_us": 22816.237037037037 00:08:39.378 } 00:08:39.378 ], 00:08:39.378 "core_count": 1 00:08:39.378 } 00:08:39.378 18:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2367348 00:08:39.378 18:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2367348 ']' 00:08:39.378 18:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2367348 00:08:39.378 18:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:39.378 18:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.378 18:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2367348 00:08:39.637 18:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:39.637 18:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:39.637 18:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2367348' 00:08:39.637 killing process with pid 2367348 00:08:39.637 18:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2367348 00:08:39.637 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.637 00:08:39.637 Latency(us) 00:08:39.637 [2024-11-28T17:10:51.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.637 [2024-11-28T17:10:51.737Z] =================================================================================================================== 00:08:39.637 [2024-11-28T17:10:51.737Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.637 18:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2367348 00:08:40.571 18:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.828 18:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:41.086 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b549a12e-6755-44a7-b40b-2e80bd60face 00:08:41.086 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:41.344 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:41.344 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:41.344 18:10:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.603 [2024-11-28 18:10:53.584356] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b549a12e-6755-44a7-b40b-2e80bd60face 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b549a12e-6755-44a7-b40b-2e80bd60face 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:41.603 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b549a12e-6755-44a7-b40b-2e80bd60face 00:08:41.861 request: 00:08:41.861 { 00:08:41.861 "uuid": "b549a12e-6755-44a7-b40b-2e80bd60face", 00:08:41.861 "method": "bdev_lvol_get_lvstores", 00:08:41.861 "req_id": 1 00:08:41.861 } 00:08:41.861 Got JSON-RPC error response 00:08:41.861 response: 00:08:41.861 { 00:08:41.861 "code": -19, 00:08:41.861 "message": "No such device" 00:08:41.861 } 00:08:41.861 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:41.861 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.861 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.861 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.861 18:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.118 aio_bdev 00:08:42.377 18:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 788ddf5a-bc48-47df-b562-3a23d3c6c69c 00:08:42.377 18:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=788ddf5a-bc48-47df-b562-3a23d3c6c69c 00:08:42.377 18:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.377 18:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:42.377 18:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.377 18:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.377 18:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:42.634 18:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 788ddf5a-bc48-47df-b562-3a23d3c6c69c -t 2000 00:08:42.892 [ 00:08:42.892 { 00:08:42.892 "name": "788ddf5a-bc48-47df-b562-3a23d3c6c69c", 00:08:42.892 "aliases": [ 00:08:42.892 "lvs/lvol" 00:08:42.892 ], 00:08:42.892 "product_name": "Logical Volume", 00:08:42.892 "block_size": 4096, 00:08:42.892 "num_blocks": 38912, 00:08:42.892 "uuid": "788ddf5a-bc48-47df-b562-3a23d3c6c69c", 00:08:42.892 "assigned_rate_limits": { 00:08:42.892 "rw_ios_per_sec": 0, 00:08:42.892 "rw_mbytes_per_sec": 0, 00:08:42.892 "r_mbytes_per_sec": 0, 00:08:42.892 "w_mbytes_per_sec": 0 00:08:42.892 }, 00:08:42.892 "claimed": false, 00:08:42.892 "zoned": false, 00:08:42.892 "supported_io_types": { 00:08:42.892 "read": true, 00:08:42.892 "write": true, 00:08:42.892 "unmap": true, 00:08:42.892 "flush": false, 00:08:42.892 "reset": true, 00:08:42.892 "nvme_admin": false, 00:08:42.892 "nvme_io": false, 00:08:42.892 "nvme_io_md": false, 00:08:42.892 "write_zeroes": true, 00:08:42.892 "zcopy": false, 00:08:42.892 "get_zone_info": false, 00:08:42.892 "zone_management": false, 00:08:42.892 "zone_append": false, 00:08:42.892 "compare": false, 00:08:42.892 "compare_and_write": false, 00:08:42.892 "abort": false, 00:08:42.892 "seek_hole": true, 00:08:42.892 "seek_data": true, 00:08:42.892 "copy": false, 00:08:42.892 "nvme_iov_md": false 00:08:42.892 }, 00:08:42.892 "driver_specific": { 00:08:42.892 "lvol": { 00:08:42.892 "lvol_store_uuid": "b549a12e-6755-44a7-b40b-2e80bd60face", 00:08:42.892 "base_bdev": "aio_bdev", 00:08:42.892 "thin_provision": false, 00:08:42.892 "num_allocated_clusters": 38, 00:08:42.892 "snapshot": false, 00:08:42.892 "clone": false, 00:08:42.892 "esnap_clone": false 00:08:42.892 } 00:08:42.892 } 00:08:42.892 } 00:08:42.892 ] 00:08:42.892 18:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:42.892 18:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b549a12e-6755-44a7-b40b-2e80bd60face 00:08:42.892 
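The last check of the clean variant is the hot-remove path shown just above: deleting the base AIO bdev closes the lvstore, querying it must fail with -19 "No such device", and re-creating the AIO bdev on the same file lets examine re-open the lvstore and bring the lvol bdev back. As a stand-alone sketch, with the UUIDs kept in $lvs and $lvol:

scripts/rpc.py bdev_aio_delete aio_bdev                       # lvstore "lvs" is closed with its base bdev
if scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" >/dev/null 2>&1; then
    echo "lvstore should be gone after hot-remove" >&2
    exit 1                                                    # the trace expects JSON-RPC error -19 here
fi
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # same file, examine re-opens lvs
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000              # waits up to 2s for the lvol to reappear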
18:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:43.155 18:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:43.155 18:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b549a12e-6755-44a7-b40b-2e80bd60face 00:08:43.155 18:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:43.415 18:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:43.415 18:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 788ddf5a-bc48-47df-b562-3a23d3c6c69c 00:08:43.674 18:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b549a12e-6755-44a7-b40b-2e80bd60face 00:08:43.933 18:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.192 00:08:44.192 real 0m19.541s 00:08:44.192 user 0m19.308s 00:08:44.192 sys 0m1.944s 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:44.192 ************************************ 00:08:44.192 END TEST lvs_grow_clean 00:08:44.192 ************************************ 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:44.192 ************************************ 00:08:44.192 START TEST lvs_grow_dirty 00:08:44.192 ************************************ 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
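Before moving on to the dirty variant, the clean variant checks the final cluster accounting and tears everything down in reverse order; roughly as below, where 38 of the 99 clusters are allocated to the 150M lvol, leaving 61 free:

(( $(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters') == 61 ))
(( $(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters') == 99 ))
scripts/rpc.py bdev_lvol_delete "$lvol"
scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
scripts/rpc.py bdev_aio_delete aio_bdev
rm -f test/nvmf/target/aio_bdev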
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:44.192 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:44.759 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:44.759 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:45.018 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0b67122a-ca03-4701-b484-8ed4702f41c2 00:08:45.018 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:08:45.018 18:10:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:45.277 18:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:45.277 18:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:45.277 18:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0b67122a-ca03-4701-b484-8ed4702f41c2 lvol 150 00:08:45.535 18:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=67e58db6-7d58-48f4-9ab5-4463864cd400 00:08:45.535 18:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:45.535 18:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:45.793 [2024-11-28 18:10:57.716020] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:45.793 [2024-11-28 18:10:57.716148] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:45.793 true 00:08:45.793 18:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:08:45.793 18:10:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:46.050 18:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:46.050 18:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:46.308 18:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 67e58db6-7d58-48f4-9ab5-4463864cd400 00:08:46.566 18:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:46.824 [2024-11-28 18:10:58.855735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.824 18:10:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.083 18:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2369716 00:08:47.083 18:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:47.083 18:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:47.083 18:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2369716 /var/tmp/bdevperf.sock 00:08:47.083 18:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2369716 ']' 00:08:47.083 18:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:47.083 18:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.083 18:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:47.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:47.083 18:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.083 18:10:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:47.341 [2024-11-28 18:10:59.231901] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:47.341 [2024-11-28 18:10:59.232025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369716 ] 00:08:47.341 [2024-11-28 18:10:59.375222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.599 [2024-11-28 18:10:59.501622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.166 18:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.166 18:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:48.166 18:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:48.731 Nvme0n1 00:08:48.731 18:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:48.990 [ 00:08:48.990 { 00:08:48.990 "name": "Nvme0n1", 00:08:48.990 "aliases": [ 00:08:48.990 "67e58db6-7d58-48f4-9ab5-4463864cd400" 00:08:48.990 ], 00:08:48.990 "product_name": "NVMe disk", 00:08:48.990 "block_size": 4096, 00:08:48.990 "num_blocks": 38912, 00:08:48.990 "uuid": "67e58db6-7d58-48f4-9ab5-4463864cd400", 00:08:48.990 "numa_id": 0, 00:08:48.990 "assigned_rate_limits": { 00:08:48.990 "rw_ios_per_sec": 0, 00:08:48.990 "rw_mbytes_per_sec": 0, 00:08:48.990 "r_mbytes_per_sec": 0, 00:08:48.990 "w_mbytes_per_sec": 0 00:08:48.990 }, 00:08:48.990 "claimed": false, 00:08:48.990 "zoned": false, 00:08:48.990 "supported_io_types": { 00:08:48.990 "read": true, 00:08:48.990 "write": true, 00:08:48.990 "unmap": true, 00:08:48.990 "flush": true, 00:08:48.990 "reset": true, 00:08:48.990 "nvme_admin": true, 00:08:48.990 "nvme_io": true, 00:08:48.990 "nvme_io_md": false, 00:08:48.990 "write_zeroes": true, 00:08:48.990 "zcopy": false, 00:08:48.990 "get_zone_info": false, 00:08:48.990 "zone_management": false, 00:08:48.990 "zone_append": false, 00:08:48.990 "compare": true, 00:08:48.990 "compare_and_write": true, 00:08:48.990 "abort": true, 00:08:48.990 "seek_hole": false, 00:08:48.990 "seek_data": false, 00:08:48.990 "copy": true, 00:08:48.990 "nvme_iov_md": false 00:08:48.990 }, 00:08:48.990 "memory_domains": [ 00:08:48.990 { 00:08:48.990 "dma_device_id": "system", 00:08:48.990 "dma_device_type": 1 00:08:48.990 } 00:08:48.990 ], 00:08:48.990 "driver_specific": { 00:08:48.990 "nvme": [ 00:08:48.990 { 00:08:48.990 "trid": { 00:08:48.990 "trtype": "TCP", 00:08:48.990 "adrfam": "IPv4", 00:08:48.990 "traddr": "10.0.0.2", 00:08:48.990 "trsvcid": "4420", 00:08:48.990 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:48.990 }, 00:08:48.990 "ctrlr_data": { 00:08:48.990 "cntlid": 1, 00:08:48.990 "vendor_id": "0x8086", 00:08:48.990 "model_number": "SPDK bdev Controller", 00:08:48.990 "serial_number": "SPDK0", 00:08:48.990 "firmware_revision": "25.01", 00:08:48.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:48.990 "oacs": { 00:08:48.990 "security": 0, 00:08:48.990 "format": 0, 00:08:48.990 "firmware": 0, 00:08:48.990 "ns_manage": 0 00:08:48.990 }, 00:08:48.990 "multi_ctrlr": true, 00:08:48.990 
"ana_reporting": false 00:08:48.990 }, 00:08:48.990 "vs": { 00:08:48.990 "nvme_version": "1.3" 00:08:48.990 }, 00:08:48.990 "ns_data": { 00:08:48.990 "id": 1, 00:08:48.990 "can_share": true 00:08:48.990 } 00:08:48.990 } 00:08:48.990 ], 00:08:48.990 "mp_policy": "active_passive" 00:08:48.990 } 00:08:48.990 } 00:08:48.990 ] 00:08:48.990 18:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2369950 00:08:48.990 18:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:48.990 18:11:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:48.990 Running I/O for 10 seconds... 00:08:50.038 Latency(us) 00:08:50.038 [2024-11-28T17:11:02.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:50.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.038 Nvme0n1 : 1.00 10455.00 40.84 0.00 0.00 0.00 0.00 0.00 00:08:50.038 [2024-11-28T17:11:02.138Z] =================================================================================================================== 00:08:50.038 [2024-11-28T17:11:02.138Z] Total : 10455.00 40.84 0.00 0.00 0.00 0.00 0.00 00:08:50.038 00:08:50.974 18:11:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:08:50.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.974 Nvme0n1 : 2.00 10688.50 41.75 0.00 0.00 0.00 0.00 0.00 00:08:50.974 [2024-11-28T17:11:03.074Z] =================================================================================================================== 00:08:50.974 [2024-11-28T17:11:03.074Z] Total : 10688.50 41.75 0.00 0.00 0.00 0.00 0.00 00:08:50.974 00:08:51.232 true 00:08:51.232 18:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:08:51.232 18:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:51.491 18:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:51.491 18:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:51.491 18:11:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2369950 00:08:52.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.086 Nvme0n1 : 3.00 10725.00 41.89 0.00 0.00 0.00 0.00 0.00 00:08:52.086 [2024-11-28T17:11:04.186Z] =================================================================================================================== 00:08:52.086 [2024-11-28T17:11:04.186Z] Total : 10725.00 41.89 0.00 0.00 0.00 0.00 0.00 00:08:52.086 00:08:53.022 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.022 Nvme0n1 : 4.00 10758.75 42.03 0.00 0.00 0.00 0.00 0.00 00:08:53.022 [2024-11-28T17:11:05.122Z] 
=================================================================================================================== 00:08:53.022 [2024-11-28T17:11:05.122Z] Total : 10758.75 42.03 0.00 0.00 0.00 0.00 0.00 00:08:53.022 00:08:53.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.956 Nvme0n1 : 5.00 10816.80 42.25 0.00 0.00 0.00 0.00 0.00 00:08:53.956 [2024-11-28T17:11:06.056Z] =================================================================================================================== 00:08:53.956 [2024-11-28T17:11:06.056Z] Total : 10816.80 42.25 0.00 0.00 0.00 0.00 0.00 00:08:53.956 00:08:55.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.332 Nvme0n1 : 6.00 10845.17 42.36 0.00 0.00 0.00 0.00 0.00 00:08:55.332 [2024-11-28T17:11:07.432Z] =================================================================================================================== 00:08:55.332 [2024-11-28T17:11:07.432Z] Total : 10845.17 42.36 0.00 0.00 0.00 0.00 0.00 00:08:55.333 00:08:56.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.269 Nvme0n1 : 7.00 10856.86 42.41 0.00 0.00 0.00 0.00 0.00 00:08:56.269 [2024-11-28T17:11:08.369Z] =================================================================================================================== 00:08:56.269 [2024-11-28T17:11:08.369Z] Total : 10856.86 42.41 0.00 0.00 0.00 0.00 0.00 00:08:56.269 00:08:57.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.205 Nvme0n1 : 8.00 10888.88 42.53 0.00 0.00 0.00 0.00 0.00 00:08:57.205 [2024-11-28T17:11:09.305Z] =================================================================================================================== 00:08:57.205 [2024-11-28T17:11:09.305Z] Total : 10888.88 42.53 0.00 0.00 0.00 0.00 0.00 00:08:57.205 00:08:58.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.141 Nvme0n1 : 9.00 10907.00 42.61 0.00 0.00 0.00 0.00 0.00 00:08:58.141 [2024-11-28T17:11:10.241Z] =================================================================================================================== 00:08:58.141 [2024-11-28T17:11:10.241Z] Total : 10907.00 42.61 0.00 0.00 0.00 0.00 0.00 00:08:58.141 00:08:59.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.076 Nvme0n1 : 10.00 10915.00 42.64 0.00 0.00 0.00 0.00 0.00 00:08:59.076 [2024-11-28T17:11:11.176Z] =================================================================================================================== 00:08:59.076 [2024-11-28T17:11:11.176Z] Total : 10915.00 42.64 0.00 0.00 0.00 0.00 0.00 00:08:59.076 00:08:59.076 00:08:59.076 Latency(us) 00:08:59.076 [2024-11-28T17:11:11.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.076 Nvme0n1 : 10.01 10919.27 42.65 0.00 0.00 11715.27 5145.79 23010.42 00:08:59.076 [2024-11-28T17:11:11.176Z] =================================================================================================================== 00:08:59.076 [2024-11-28T17:11:11.176Z] Total : 10919.27 42.65 0.00 0.00 11715.27 5145.79 23010.42 00:08:59.076 { 00:08:59.076 "results": [ 00:08:59.076 { 00:08:59.076 "job": "Nvme0n1", 00:08:59.076 "core_mask": "0x2", 00:08:59.076 "workload": "randwrite", 00:08:59.076 "status": "finished", 00:08:59.076 "queue_depth": 128, 00:08:59.076 "io_size": 4096, 00:08:59.076 
"runtime": 10.007812, 00:08:59.076 "iops": 10919.269866380384, 00:08:59.076 "mibps": 42.653397915548375, 00:08:59.076 "io_failed": 0, 00:08:59.076 "io_timeout": 0, 00:08:59.076 "avg_latency_us": 11715.266945534087, 00:08:59.076 "min_latency_us": 5145.789629629629, 00:08:59.076 "max_latency_us": 23010.417777777777 00:08:59.076 } 00:08:59.076 ], 00:08:59.076 "core_count": 1 00:08:59.076 } 00:08:59.076 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2369716 00:08:59.076 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2369716 ']' 00:08:59.076 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2369716 00:08:59.076 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:59.076 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.076 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2369716 00:08:59.076 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:59.076 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:59.076 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2369716' 00:08:59.076 killing process with pid 2369716 00:08:59.076 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2369716 00:08:59.076 Received shutdown signal, test time was about 10.000000 seconds 00:08:59.076 00:08:59.076 Latency(us) 00:08:59.076 [2024-11-28T17:11:11.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.076 [2024-11-28T17:11:11.176Z] =================================================================================================================== 00:08:59.076 [2024-11-28T17:11:11.176Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:59.076 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2369716 00:09:00.011 18:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.269 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:00.527 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:09:00.527 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:00.786 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:00.786 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:00.786 18:11:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2366774 00:09:00.786 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2366774 00:09:01.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2366774 Killed "${NVMF_APP[@]}" "$@" 00:09:01.044 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:01.044 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:01.044 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:01.044 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:01.044 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.044 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2371420 00:09:01.044 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:01.044 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2371420 00:09:01.044 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2371420 ']' 00:09:01.044 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.045 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.045 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.045 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.045 18:11:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.045 [2024-11-28 18:11:13.009863] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:01.045 [2024-11-28 18:11:13.009987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.303 [2024-11-28 18:11:13.165191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.303 [2024-11-28 18:11:13.302384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.303 [2024-11-28 18:11:13.302477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.303 [2024-11-28 18:11:13.302503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.303 [2024-11-28 18:11:13.302527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
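Up to this point the dirty variant repeats the clean flow; the difference, visible around here in the trace, is that the nvmf_tgt owning the grown and still-dirty lvstore is killed with SIGKILL, a fresh target is started, and re-creating the AIO bdev forces blobstore recovery, after which the grown geometry must still be reported. A rough sketch of that sequence, leaving out the network-namespace wrapper the test uses and assuming $nvmfpid, $lvs and $lvol from the trace:

kill -9 "$nvmfpid"                         # crash the target before the lvstore is cleanly closed
wait "$nvmfpid" || true
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &                   # fresh target; wait for its RPC socket
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # logs "Performing recovery on blobstore"
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000                          # the lvol comes back after recovery
(( $(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters') == 61 ))
(( $(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters') == 99 ))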
00:09:01.303 [2024-11-28 18:11:13.302547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.303 [2024-11-28 18:11:13.304216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.236 18:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.236 18:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:02.236 18:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.236 18:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.236 18:11:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:02.236 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.236 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:02.236 [2024-11-28 18:11:14.261917] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:02.236 [2024-11-28 18:11:14.262180] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:02.236 [2024-11-28 18:11:14.262267] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:02.236 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:02.236 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 67e58db6-7d58-48f4-9ab5-4463864cd400 00:09:02.236 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=67e58db6-7d58-48f4-9ab5-4463864cd400 00:09:02.236 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.236 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:02.237 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.237 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.237 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:02.495 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67e58db6-7d58-48f4-9ab5-4463864cd400 -t 2000 00:09:02.754 [ 00:09:02.754 { 00:09:02.754 "name": "67e58db6-7d58-48f4-9ab5-4463864cd400", 00:09:02.754 "aliases": [ 00:09:02.754 "lvs/lvol" 00:09:02.754 ], 00:09:02.754 "product_name": "Logical Volume", 00:09:02.754 "block_size": 4096, 00:09:02.754 "num_blocks": 38912, 00:09:02.754 "uuid": "67e58db6-7d58-48f4-9ab5-4463864cd400", 00:09:02.754 "assigned_rate_limits": { 00:09:02.754 "rw_ios_per_sec": 0, 00:09:02.754 "rw_mbytes_per_sec": 0, 
00:09:02.754 "r_mbytes_per_sec": 0, 00:09:02.754 "w_mbytes_per_sec": 0 00:09:02.754 }, 00:09:02.754 "claimed": false, 00:09:02.754 "zoned": false, 00:09:02.754 "supported_io_types": { 00:09:02.754 "read": true, 00:09:02.754 "write": true, 00:09:02.754 "unmap": true, 00:09:02.754 "flush": false, 00:09:02.754 "reset": true, 00:09:02.754 "nvme_admin": false, 00:09:02.754 "nvme_io": false, 00:09:02.754 "nvme_io_md": false, 00:09:02.754 "write_zeroes": true, 00:09:02.754 "zcopy": false, 00:09:02.754 "get_zone_info": false, 00:09:02.754 "zone_management": false, 00:09:02.754 "zone_append": false, 00:09:02.754 "compare": false, 00:09:02.754 "compare_and_write": false, 00:09:02.754 "abort": false, 00:09:02.754 "seek_hole": true, 00:09:02.754 "seek_data": true, 00:09:02.754 "copy": false, 00:09:02.754 "nvme_iov_md": false 00:09:02.754 }, 00:09:02.754 "driver_specific": { 00:09:02.754 "lvol": { 00:09:02.754 "lvol_store_uuid": "0b67122a-ca03-4701-b484-8ed4702f41c2", 00:09:02.754 "base_bdev": "aio_bdev", 00:09:02.754 "thin_provision": false, 00:09:02.754 "num_allocated_clusters": 38, 00:09:02.754 "snapshot": false, 00:09:02.754 "clone": false, 00:09:02.754 "esnap_clone": false 00:09:02.754 } 00:09:02.754 } 00:09:02.754 } 00:09:02.754 ] 00:09:02.754 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:02.754 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:09:02.754 18:11:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:03.319 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:03.319 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:09:03.319 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:03.319 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:03.319 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:03.576 [2024-11-28 18:11:15.674911] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:03.834 18:11:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:09:04.092 request: 00:09:04.092 { 00:09:04.092 "uuid": "0b67122a-ca03-4701-b484-8ed4702f41c2", 00:09:04.092 "method": "bdev_lvol_get_lvstores", 00:09:04.092 "req_id": 1 00:09:04.092 } 00:09:04.092 Got JSON-RPC error response 00:09:04.092 response: 00:09:04.092 { 00:09:04.092 "code": -19, 00:09:04.092 "message": "No such device" 00:09:04.092 } 00:09:04.092 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:04.092 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:04.092 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:04.092 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:04.092 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:04.349 aio_bdev 00:09:04.349 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 67e58db6-7d58-48f4-9ab5-4463864cd400 00:09:04.349 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=67e58db6-7d58-48f4-9ab5-4463864cd400 00:09:04.349 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.349 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:04.349 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.349 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.349 18:11:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:04.607 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67e58db6-7d58-48f4-9ab5-4463864cd400 -t 2000 00:09:04.865 [ 00:09:04.865 { 00:09:04.865 "name": "67e58db6-7d58-48f4-9ab5-4463864cd400", 00:09:04.865 "aliases": [ 00:09:04.865 "lvs/lvol" 00:09:04.865 ], 00:09:04.865 "product_name": "Logical Volume", 00:09:04.865 "block_size": 4096, 00:09:04.865 "num_blocks": 38912, 00:09:04.865 "uuid": "67e58db6-7d58-48f4-9ab5-4463864cd400", 00:09:04.865 "assigned_rate_limits": { 00:09:04.865 "rw_ios_per_sec": 0, 00:09:04.865 "rw_mbytes_per_sec": 0, 00:09:04.865 "r_mbytes_per_sec": 0, 00:09:04.865 "w_mbytes_per_sec": 0 00:09:04.865 }, 00:09:04.865 "claimed": false, 00:09:04.865 "zoned": false, 00:09:04.865 "supported_io_types": { 00:09:04.865 "read": true, 00:09:04.865 "write": true, 00:09:04.865 "unmap": true, 00:09:04.865 "flush": false, 00:09:04.865 "reset": true, 00:09:04.865 "nvme_admin": false, 00:09:04.865 "nvme_io": false, 00:09:04.865 "nvme_io_md": false, 00:09:04.865 "write_zeroes": true, 00:09:04.865 "zcopy": false, 00:09:04.865 "get_zone_info": false, 00:09:04.865 "zone_management": false, 00:09:04.865 "zone_append": false, 00:09:04.865 "compare": false, 00:09:04.865 "compare_and_write": false, 00:09:04.865 "abort": false, 00:09:04.865 "seek_hole": true, 00:09:04.865 "seek_data": true, 00:09:04.865 "copy": false, 00:09:04.865 "nvme_iov_md": false 00:09:04.865 }, 00:09:04.865 "driver_specific": { 00:09:04.865 "lvol": { 00:09:04.865 "lvol_store_uuid": "0b67122a-ca03-4701-b484-8ed4702f41c2", 00:09:04.865 "base_bdev": "aio_bdev", 00:09:04.865 "thin_provision": false, 00:09:04.865 "num_allocated_clusters": 38, 00:09:04.865 "snapshot": false, 00:09:04.865 "clone": false, 00:09:04.865 "esnap_clone": false 00:09:04.865 } 00:09:04.865 } 00:09:04.865 } 00:09:04.865 ] 00:09:04.865 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:04.865 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:09:04.865 18:11:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:05.124 18:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:05.124 18:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:09:05.124 18:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:05.382 18:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:05.382 18:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 67e58db6-7d58-48f4-9ab5-4463864cd400 00:09:05.640 18:11:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b67122a-ca03-4701-b484-8ed4702f41c2 00:09:05.898 18:11:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:06.157 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:06.157 00:09:06.157 real 0m21.976s 00:09:06.157 user 0m55.924s 00:09:06.157 sys 0m4.511s 00:09:06.157 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.157 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:06.157 ************************************ 00:09:06.157 END TEST lvs_grow_dirty 00:09:06.157 ************************************ 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:06.416 nvmf_trace.0 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:06.416 rmmod nvme_tcp 00:09:06.416 rmmod nvme_fabrics 00:09:06.416 rmmod nvme_keyring 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:06.416 
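Before the harness kills the target process below, the lvs_grow_dirty steps traced above reduce to the RPC sequence sketched here. This is a condensed sketch, not the harness script itself: $rpc and $aio_file are shorthand introduced only for this summary, and the lvstore/lvol UUIDs and cluster counts are the ones from this run.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aio_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

  # Open the AIO file that was left dirty: blobstore recovery replays the metadata
  # ("Performing recovery on blobstore", "Recover: blob 0x1") and the lvol reappears.
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b 67e58db6-7d58-48f4-9ab5-4463864cd400 -t 2000

  # Cluster accounting must survive recovery (61 free / 99 total in this run).
  $rpc bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 | jq -r '.[0].free_clusters'
  $rpc bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 | jq -r '.[0].total_data_clusters'

  # Hot-remove the backing bdev: the lvol layer closes the store, so a lookup
  # must now fail with -19 "No such device".
  $rpc bdev_aio_delete aio_bdev
  $rpc bdev_lvol_get_lvstores -u 0b67122a-ca03-4701-b484-8ed4702f41c2 || true

  # Re-attach, re-verify the same clusters, then tear down in order:
  # lvol, lvstore, AIO bdev, backing file.
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096
  $rpc bdev_wait_for_examine
  $rpc bdev_lvol_delete 67e58db6-7d58-48f4-9ab5-4463864cd400
  $rpc bdev_lvol_delete_lvstore -u 0b67122a-ca03-4701-b484-8ed4702f41c2
  $rpc bdev_aio_delete aio_bdev
  rm -f "$aio_file"

The cleanup that follows in the log (process_shm, nvmftestfini, module removal, killing the target pid) is the shared teardown for the whole nvmf_lvs_grow group, not part of the dirty-recovery check itself.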
18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2371420 ']' 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2371420 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2371420 ']' 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2371420 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2371420 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2371420' 00:09:06.416 killing process with pid 2371420 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2371420 00:09:06.416 18:11:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2371420 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.793 18:11:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.701 00:09:09.701 real 0m48.869s 00:09:09.701 user 1m23.209s 00:09:09.701 sys 0m8.741s 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.701 ************************************ 00:09:09.701 END TEST nvmf_lvs_grow 00:09:09.701 ************************************ 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.701 ************************************ 00:09:09.701 START TEST nvmf_bdev_io_wait 00:09:09.701 ************************************ 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.701 * Looking for test storage... 00:09:09.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.701 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.962 --rc genhtml_branch_coverage=1 00:09:09.962 --rc genhtml_function_coverage=1 00:09:09.962 --rc genhtml_legend=1 00:09:09.962 --rc geninfo_all_blocks=1 00:09:09.962 --rc geninfo_unexecuted_blocks=1 00:09:09.962 00:09:09.962 ' 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.962 --rc genhtml_branch_coverage=1 00:09:09.962 --rc genhtml_function_coverage=1 00:09:09.962 --rc genhtml_legend=1 00:09:09.962 --rc geninfo_all_blocks=1 00:09:09.962 --rc geninfo_unexecuted_blocks=1 00:09:09.962 00:09:09.962 ' 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.962 --rc genhtml_branch_coverage=1 00:09:09.962 --rc genhtml_function_coverage=1 00:09:09.962 --rc genhtml_legend=1 00:09:09.962 --rc geninfo_all_blocks=1 00:09:09.962 --rc geninfo_unexecuted_blocks=1 00:09:09.962 00:09:09.962 ' 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.962 --rc genhtml_branch_coverage=1 00:09:09.962 --rc genhtml_function_coverage=1 00:09:09.962 --rc genhtml_legend=1 00:09:09.962 --rc geninfo_all_blocks=1 00:09:09.962 --rc geninfo_unexecuted_blocks=1 00:09:09.962 00:09:09.962 ' 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.962 18:11:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.962 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.963 18:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.868 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:11.869 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:11.869 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.869 18:11:23 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:11.869 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:11.869 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.869 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.128 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.128 18:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.128 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:12.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:09:12.129 00:09:12.129 --- 10.0.0.2 ping statistics --- 00:09:12.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.129 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:09:12.129 00:09:12.129 --- 10.0.0.1 ping statistics --- 00:09:12.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.129 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2374218 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2374218 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2374218 ']' 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.129 18:11:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.388 [2024-11-28 18:11:24.271698] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:12.388 [2024-11-28 18:11:24.271849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.388 [2024-11-28 18:11:24.412947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.647 [2024-11-28 18:11:24.554069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.647 [2024-11-28 18:11:24.554160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.647 [2024-11-28 18:11:24.554186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.647 [2024-11-28 18:11:24.554211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.647 [2024-11-28 18:11:24.554231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.647 [2024-11-28 18:11:24.557116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.647 [2024-11-28 18:11:24.557181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.647 [2024-11-28 18:11:24.557229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.647 [2024-11-28 18:11:24.557234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.211 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.470 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.470 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.470 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.470 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:13.470 [2024-11-28 18:11:25.503265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.470 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.470 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:13.470 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.470 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 Malloc0 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 [2024-11-28 18:11:25.610153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.729 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2374379 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2374381 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2374383 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:13.730 18:11:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:13.730 { 00:09:13.730 "params": { 00:09:13.730 "name": "Nvme$subsystem", 00:09:13.730 "trtype": "$TEST_TRANSPORT", 00:09:13.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.730 "adrfam": "ipv4", 00:09:13.730 "trsvcid": "$NVMF_PORT", 00:09:13.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.730 "hdgst": ${hdgst:-false}, 00:09:13.730 "ddgst": ${ddgst:-false} 00:09:13.730 }, 00:09:13.730 "method": "bdev_nvme_attach_controller" 00:09:13.730 } 00:09:13.730 EOF 00:09:13.730 )") 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2374385 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:13.730 { 00:09:13.730 "params": { 00:09:13.730 "name": "Nvme$subsystem", 00:09:13.730 "trtype": "$TEST_TRANSPORT", 00:09:13.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.730 "adrfam": "ipv4", 00:09:13.730 "trsvcid": "$NVMF_PORT", 00:09:13.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.730 "hdgst": ${hdgst:-false}, 00:09:13.730 "ddgst": ${ddgst:-false} 00:09:13.730 }, 00:09:13.730 "method": "bdev_nvme_attach_controller" 00:09:13.730 } 00:09:13.730 EOF 00:09:13.730 )") 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:13.730 18:11:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:13.730 { 00:09:13.730 "params": { 00:09:13.730 "name": "Nvme$subsystem", 00:09:13.730 "trtype": "$TEST_TRANSPORT", 00:09:13.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.730 "adrfam": "ipv4", 00:09:13.730 "trsvcid": "$NVMF_PORT", 00:09:13.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.730 "hdgst": ${hdgst:-false}, 00:09:13.730 "ddgst": ${ddgst:-false} 00:09:13.730 }, 00:09:13.730 "method": "bdev_nvme_attach_controller" 00:09:13.730 } 00:09:13.730 EOF 00:09:13.730 )") 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:13.730 { 00:09:13.730 "params": { 00:09:13.730 "name": "Nvme$subsystem", 00:09:13.730 "trtype": "$TEST_TRANSPORT", 00:09:13.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.730 "adrfam": "ipv4", 00:09:13.730 "trsvcid": "$NVMF_PORT", 00:09:13.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.730 "hdgst": ${hdgst:-false}, 00:09:13.730 "ddgst": ${ddgst:-false} 00:09:13.730 }, 00:09:13.730 "method": "bdev_nvme_attach_controller" 00:09:13.730 } 00:09:13.730 EOF 00:09:13.730 )") 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2374379 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:13.730 "params": { 00:09:13.730 "name": "Nvme1", 00:09:13.730 "trtype": "tcp", 00:09:13.730 "traddr": "10.0.0.2", 00:09:13.730 "adrfam": "ipv4", 00:09:13.730 "trsvcid": "4420", 00:09:13.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.730 "hdgst": false, 00:09:13.730 "ddgst": false 00:09:13.730 }, 00:09:13.730 "method": "bdev_nvme_attach_controller" 00:09:13.730 }' 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:13.730 "params": { 00:09:13.730 "name": "Nvme1", 00:09:13.730 "trtype": "tcp", 00:09:13.730 "traddr": "10.0.0.2", 00:09:13.730 "adrfam": "ipv4", 00:09:13.730 "trsvcid": "4420", 00:09:13.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.730 "hdgst": false, 00:09:13.730 "ddgst": false 00:09:13.730 }, 00:09:13.730 "method": "bdev_nvme_attach_controller" 00:09:13.730 }' 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:13.730 "params": { 00:09:13.730 "name": "Nvme1", 00:09:13.730 "trtype": "tcp", 00:09:13.730 "traddr": "10.0.0.2", 00:09:13.730 "adrfam": "ipv4", 00:09:13.730 "trsvcid": "4420", 00:09:13.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.730 "hdgst": false, 00:09:13.730 "ddgst": false 00:09:13.730 }, 00:09:13.730 "method": "bdev_nvme_attach_controller" 00:09:13.730 }' 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:13.730 18:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:13.730 "params": { 00:09:13.730 "name": "Nvme1", 00:09:13.730 "trtype": "tcp", 00:09:13.730 "traddr": "10.0.0.2", 00:09:13.730 "adrfam": "ipv4", 00:09:13.730 "trsvcid": "4420", 00:09:13.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.730 "hdgst": false, 00:09:13.730 "ddgst": false 00:09:13.730 }, 00:09:13.730 "method": "bdev_nvme_attach_controller" 00:09:13.730 }' 00:09:13.730 [2024-11-28 18:11:25.700844] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:13.730 [2024-11-28 18:11:25.700847] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:13.730 [2024-11-28 18:11:25.700844] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
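Each resolved stanza printed above is what bdevperf executes at start-up as a bdev_nvme_attach_controller call against the target at 10.0.0.2:4420. The same attach can be issued by hand through rpc.py; the flags below mirror the rpc_cmd invocation used later in this log for the queue-depth test, with the bdev name taken from the stanza. This is a hypothetical manual equivalent, not a step this run performs:

# Hypothetical manual attach; in this run bdevperf performs it while parsing --json.
scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1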
00:09:13.730 [2024-11-28 18:11:25.701009] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:09:13.730 [2024-11-28 18:11:25.701010] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:09:13.730 [2024-11-28 18:11:25.701012] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:09:13.730 [2024-11-28 18:11:25.702349] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:09:13.731 [2024-11-28 18:11:25.702519] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:09:13.989 [2024-11-28 18:11:25.951949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.989 [2024-11-28 18:11:26.054341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.989 [2024-11-28 18:11:26.073002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:09:14.248 [2024-11-28 18:11:26.154264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:14.248 [2024-11-28 18:11:26.178323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:09:14.248 [2024-11-28 18:11:26.231252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:14.248 [2024-11-28 18:11:26.276567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:09:14.248 [2024-11-28 18:11:26.348448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:09:14.508 Running I/O for 1 seconds...
00:09:14.767 Running I/O for 1 seconds...
00:09:14.767 Running I/O for 1 seconds...
00:09:14.767 Running I/O for 1 seconds...
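The four bdevperf instances traced above run concurrently against the same target: read on core mask 0x20, flush on 0x40, unmap on 0x80, and a write instance on 0x10 (its command line appears earlier in the log), each fed its JSON config through process substitution, which is where the /dev/fd/63 in the command lines comes from. A condensed sketch of the launch-and-wait pattern; the -i value and launch position of the write instance are assumptions, everything else follows the trace:

# Sketch of the parallel launch pattern from target/bdev_io_wait.sh.
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!

# The script then blocks on each PID before tearing the target down, which is
# what the "wait 2374379" / "wait 2374381" / "wait 2374383" / "wait 2374385"
# calls in this trace reflect.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"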
00:09:15.704 5259.00 IOPS, 20.54 MiB/s 00:09:15.704 Latency(us) 00:09:15.704 [2024-11-28T17:11:27.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.704 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:15.704 Nvme1n1 : 1.03 5239.38 20.47 0.00 0.00 24191.49 7184.69 48545.19 00:09:15.704 [2024-11-28T17:11:27.804Z] =================================================================================================================== 00:09:15.704 [2024-11-28T17:11:27.804Z] Total : 5239.38 20.47 0.00 0.00 24191.49 7184.69 48545.19 00:09:15.704 140312.00 IOPS, 548.09 MiB/s 00:09:15.704 Latency(us) 00:09:15.704 [2024-11-28T17:11:27.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.704 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:15.704 Nvme1n1 : 1.00 139998.84 546.87 0.00 0.00 909.45 449.04 2208.81 00:09:15.704 [2024-11-28T17:11:27.804Z] =================================================================================================================== 00:09:15.704 [2024-11-28T17:11:27.804Z] Total : 139998.84 546.87 0.00 0.00 909.45 449.04 2208.81 00:09:15.963 5207.00 IOPS, 20.34 MiB/s 00:09:15.963 Latency(us) 00:09:15.963 [2024-11-28T17:11:28.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.963 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:15.963 Nvme1n1 : 1.01 5300.68 20.71 0.00 0.00 24031.72 7378.87 57865.86 00:09:15.963 [2024-11-28T17:11:28.063Z] =================================================================================================================== 00:09:15.963 [2024-11-28T17:11:28.063Z] Total : 5300.68 20.71 0.00 0.00 24031.72 7378.87 57865.86 00:09:15.963 6596.00 IOPS, 25.77 MiB/s 00:09:15.963 Latency(us) 00:09:15.963 [2024-11-28T17:11:28.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.963 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:15.963 Nvme1n1 : 1.01 6655.47 26.00 0.00 0.00 19117.28 9223.59 39418.69 00:09:15.963 [2024-11-28T17:11:28.063Z] =================================================================================================================== 00:09:15.963 [2024-11-28T17:11:28.063Z] Total : 6655.47 26.00 0.00 0.00 19117.28 9223.59 39418.69 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2374381 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2374383 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2374385 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:16.530 rmmod nvme_tcp 00:09:16.530 rmmod nvme_fabrics 00:09:16.530 rmmod nvme_keyring 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2374218 ']' 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2374218 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2374218 ']' 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2374218 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2374218 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2374218' 00:09:16.530 killing process with pid 2374218 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2374218 00:09:16.530 18:11:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2374218 00:09:17.906 18:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.906 18:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.906 18:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.906 18:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:17.906 18:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:17.906 18:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.906 18:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.906 18:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.906 18:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.906 18:11:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.906 18:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.906 18:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.810 00:09:19.810 real 0m10.045s 00:09:19.810 user 0m28.117s 00:09:19.810 sys 0m4.302s 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:19.810 ************************************ 00:09:19.810 END TEST nvmf_bdev_io_wait 00:09:19.810 ************************************ 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.810 ************************************ 00:09:19.810 START TEST nvmf_queue_depth 00:09:19.810 ************************************ 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:19.810 * Looking for test storage... 
00:09:19.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.810 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.811 --rc genhtml_branch_coverage=1 00:09:19.811 --rc genhtml_function_coverage=1 00:09:19.811 --rc genhtml_legend=1 00:09:19.811 --rc geninfo_all_blocks=1 00:09:19.811 --rc geninfo_unexecuted_blocks=1 00:09:19.811 00:09:19.811 ' 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.811 --rc genhtml_branch_coverage=1 00:09:19.811 --rc genhtml_function_coverage=1 00:09:19.811 --rc genhtml_legend=1 00:09:19.811 --rc geninfo_all_blocks=1 00:09:19.811 --rc geninfo_unexecuted_blocks=1 00:09:19.811 00:09:19.811 ' 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.811 --rc genhtml_branch_coverage=1 00:09:19.811 --rc genhtml_function_coverage=1 00:09:19.811 --rc genhtml_legend=1 00:09:19.811 --rc geninfo_all_blocks=1 00:09:19.811 --rc geninfo_unexecuted_blocks=1 00:09:19.811 00:09:19.811 ' 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.811 --rc genhtml_branch_coverage=1 00:09:19.811 --rc genhtml_function_coverage=1 00:09:19.811 --rc genhtml_legend=1 00:09:19.811 --rc geninfo_all_blocks=1 00:09:19.811 --rc geninfo_unexecuted_blocks=1 00:09:19.811 00:09:19.811 ' 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.811 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:20.070 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:20.071 18:11:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:22.063 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:22.063 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:22.063 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:22.063 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.063 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:22.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:09:22.322 00:09:22.322 --- 10.0.0.2 ping statistics --- 00:09:22.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.322 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:09:22.322 00:09:22.322 --- 10.0.0.1 ping statistics --- 00:09:22.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.322 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2376859 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2376859 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2376859 ']' 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.322 18:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.322 [2024-11-28 18:11:34.338288] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:22.322 [2024-11-28 18:11:34.338434] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.581 [2024-11-28 18:11:34.484329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.581 [2024-11-28 18:11:34.617516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.581 [2024-11-28 18:11:34.617609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.581 [2024-11-28 18:11:34.617634] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.581 [2024-11-28 18:11:34.617662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.581 [2024-11-28 18:11:34.617687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.581 [2024-11-28 18:11:34.619337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.519 [2024-11-28 18:11:35.325010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.519 Malloc0 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.519 18:11:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.519 [2024-11-28 18:11:35.436482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2377035 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2377035 /var/tmp/bdevperf.sock 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2377035 ']' 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:23.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.519 18:11:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.519 [2024-11-28 18:11:35.523875] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:23.519 [2024-11-28 18:11:35.524017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377035 ] 00:09:23.777 [2024-11-28 18:11:35.667789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.777 [2024-11-28 18:11:35.804444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.713 18:11:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.713 18:11:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:24.713 18:11:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:24.713 18:11:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.713 18:11:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.713 NVMe0n1 00:09:24.713 18:11:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.713 18:11:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:24.713 Running I/O for 10 seconds... 00:09:27.029 5645.00 IOPS, 22.05 MiB/s [2024-11-28T17:11:40.065Z] 5804.00 IOPS, 22.67 MiB/s [2024-11-28T17:11:41.002Z] 5869.67 IOPS, 22.93 MiB/s [2024-11-28T17:11:41.937Z] 5891.00 IOPS, 23.01 MiB/s [2024-11-28T17:11:42.875Z] 5936.00 IOPS, 23.19 MiB/s [2024-11-28T17:11:43.812Z] 5969.67 IOPS, 23.32 MiB/s [2024-11-28T17:11:45.191Z] 5990.14 IOPS, 23.40 MiB/s [2024-11-28T17:11:45.758Z] 6001.88 IOPS, 23.44 MiB/s [2024-11-28T17:11:47.133Z] 5979.67 IOPS, 23.36 MiB/s [2024-11-28T17:11:47.133Z] 5974.70 IOPS, 23.34 MiB/s 00:09:35.033 Latency(us) 00:09:35.033 [2024-11-28T17:11:47.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.034 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:35.034 Verification LBA range: start 0x0 length 0x4000 00:09:35.034 NVMe0n1 : 10.12 6004.81 23.46 0.00 0.00 169345.54 24855.13 100973.99 00:09:35.034 [2024-11-28T17:11:47.134Z] =================================================================================================================== 00:09:35.034 [2024-11-28T17:11:47.134Z] Total : 6004.81 23.46 0.00 0.00 169345.54 24855.13 100973.99 00:09:35.034 { 00:09:35.034 "results": [ 00:09:35.034 { 00:09:35.034 "job": "NVMe0n1", 00:09:35.034 "core_mask": "0x1", 00:09:35.034 "workload": "verify", 00:09:35.034 "status": "finished", 00:09:35.034 "verify_range": { 00:09:35.034 "start": 0, 00:09:35.034 "length": 16384 00:09:35.034 }, 00:09:35.034 "queue_depth": 1024, 00:09:35.034 "io_size": 4096, 00:09:35.034 "runtime": 10.120385, 00:09:35.034 "iops": 6004.811081791849, 00:09:35.034 "mibps": 23.45629328824941, 00:09:35.034 "io_failed": 0, 00:09:35.034 "io_timeout": 0, 00:09:35.034 "avg_latency_us": 169345.5365239146, 00:09:35.034 "min_latency_us": 24855.134814814814, 00:09:35.034 "max_latency_us": 100973.98518518519 00:09:35.034 } 00:09:35.034 ], 00:09:35.034 "core_count": 1 00:09:35.034 } 00:09:35.034 18:11:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2377035 00:09:35.034 18:11:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2377035 ']' 00:09:35.034 18:11:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2377035 00:09:35.034 18:11:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:35.034 18:11:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.034 18:11:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2377035 00:09:35.034 18:11:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.034 18:11:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.034 18:11:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2377035' 00:09:35.034 killing process with pid 2377035 00:09:35.034 18:11:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2377035 00:09:35.034 Received shutdown signal, test time was about 10.000000 seconds 00:09:35.034 00:09:35.034 Latency(us) 00:09:35.034 [2024-11-28T17:11:47.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:35.034 [2024-11-28T17:11:47.134Z] =================================================================================================================== 00:09:35.034 [2024-11-28T17:11:47.134Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:35.034 18:11:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2377035 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.969 rmmod nvme_tcp 00:09:35.969 rmmod nvme_fabrics 00:09:35.969 rmmod nvme_keyring 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2376859 ']' 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2376859 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2376859 ']' 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2376859 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2376859 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2376859' 00:09:35.969 killing process with pid 2376859 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2376859 00:09:35.969 18:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2376859 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.343 18:11:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.249 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:39.249 00:09:39.249 real 0m19.591s 00:09:39.249 user 0m27.763s 00:09:39.249 sys 0m3.370s 00:09:39.249 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.249 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.249 ************************************ 00:09:39.249 END TEST nvmf_queue_depth 00:09:39.249 ************************************ 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.506 ************************************ 00:09:39.506 START TEST nvmf_target_multipath 00:09:39.506 ************************************ 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:39.506 * Looking for test storage... 00:09:39.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.506 --rc genhtml_branch_coverage=1 00:09:39.506 --rc genhtml_function_coverage=1 00:09:39.506 --rc genhtml_legend=1 00:09:39.506 --rc geninfo_all_blocks=1 00:09:39.506 --rc geninfo_unexecuted_blocks=1 00:09:39.506 00:09:39.506 ' 00:09:39.506 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.506 --rc genhtml_branch_coverage=1 00:09:39.507 --rc genhtml_function_coverage=1 00:09:39.507 --rc genhtml_legend=1 00:09:39.507 --rc geninfo_all_blocks=1 00:09:39.507 --rc geninfo_unexecuted_blocks=1 00:09:39.507 00:09:39.507 ' 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:39.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.507 --rc genhtml_branch_coverage=1 00:09:39.507 --rc genhtml_function_coverage=1 00:09:39.507 --rc genhtml_legend=1 00:09:39.507 --rc geninfo_all_blocks=1 00:09:39.507 --rc geninfo_unexecuted_blocks=1 00:09:39.507 00:09:39.507 ' 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.507 --rc genhtml_branch_coverage=1 00:09:39.507 --rc genhtml_function_coverage=1 00:09:39.507 --rc genhtml_legend=1 00:09:39.507 --rc geninfo_all_blocks=1 00:09:39.507 --rc geninfo_unexecuted_blocks=1 00:09:39.507 00:09:39.507 ' 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:39.507 18:11:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:42.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:42.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.036 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:42.037 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.037 18:11:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:42.037 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:09:42.037 00:09:42.037 --- 10.0.0.2 ping statistics --- 00:09:42.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.037 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:09:42.037 00:09:42.037 --- 10.0.0.1 ping statistics --- 00:09:42.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.037 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:42.037 only one NIC for nvmf test 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
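The trace above shows the loopback topology nvmf_tcp_init builds when target and initiator run on the same host: the target port cvl_0_0 is moved into a private network namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24 in the root namespace, an iptables rule opens the NVMe/TCP listener port 4420, and one ping in each direction verifies the path. A minimal standalone sketch of the same setup, using the device names, addresses, and port from this run (run as root; adjust the interface names for other NICs):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and tag the rule so teardown can find it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace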
00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.037 rmmod nvme_tcp 00:09:42.037 rmmod nvme_fabrics 00:09:42.037 rmmod nvme_keyring 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.037 18:11:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.940 00:09:43.940 real 0m4.477s 00:09:43.940 user 0m0.951s 00:09:43.940 sys 0m1.538s 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:43.940 ************************************ 00:09:43.940 END TEST nvmf_target_multipath 00:09:43.940 ************************************ 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.940 ************************************ 00:09:43.940 START TEST nvmf_zcopy 00:09:43.940 ************************************ 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:43.940 * Looking for test storage... 
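The nvmftestfini teardown traced above, before the multipath test exits, is essentially the reverse of that setup: unload the NVMe/TCP kernel modules, strip only the SPDK-tagged firewall rules, and drop the namespace and its addresses. A rough equivalent of those steps, reusing the names from this run (the body of _remove_spdk_ns is not shown in the trace, so the netns delete below is an assumption):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # iptr: drop only the rules tagged SPDK_NVMF, leave all other rules intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk      # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1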
00:09:43.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.940 18:11:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.199 --rc genhtml_branch_coverage=1 00:09:44.199 --rc genhtml_function_coverage=1 00:09:44.199 --rc genhtml_legend=1 00:09:44.199 --rc geninfo_all_blocks=1 00:09:44.199 --rc geninfo_unexecuted_blocks=1 00:09:44.199 00:09:44.199 ' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.199 --rc genhtml_branch_coverage=1 00:09:44.199 --rc genhtml_function_coverage=1 00:09:44.199 --rc genhtml_legend=1 00:09:44.199 --rc geninfo_all_blocks=1 00:09:44.199 --rc geninfo_unexecuted_blocks=1 00:09:44.199 00:09:44.199 ' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.199 --rc genhtml_branch_coverage=1 00:09:44.199 --rc genhtml_function_coverage=1 00:09:44.199 --rc genhtml_legend=1 00:09:44.199 --rc geninfo_all_blocks=1 00:09:44.199 --rc geninfo_unexecuted_blocks=1 00:09:44.199 00:09:44.199 ' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.199 --rc genhtml_branch_coverage=1 00:09:44.199 --rc genhtml_function_coverage=1 00:09:44.199 --rc genhtml_legend=1 00:09:44.199 --rc geninfo_all_blocks=1 00:09:44.199 --rc geninfo_unexecuted_blocks=1 00:09:44.199 00:09:44.199 ' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:44.199 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.200 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.200 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.200 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.200 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.200 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.200 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:44.200 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:44.200 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:44.200 18:11:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:46.104 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:46.104 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:46.104 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:46.104 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.104 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:46.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:09:46.364 00:09:46.364 --- 10.0.0.2 ping statistics --- 00:09:46.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.364 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:09:46.364 00:09:46.364 --- 10.0.0.1 ping statistics --- 00:09:46.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.364 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2382515 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2382515 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2382515 ']' 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.364 18:11:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.364 [2024-11-28 18:11:58.382405] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:46.364 [2024-11-28 18:11:58.382564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.623 [2024-11-28 18:11:58.535023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.623 [2024-11-28 18:11:58.672531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.623 [2024-11-28 18:11:58.672628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.623 [2024-11-28 18:11:58.672655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.623 [2024-11-28 18:11:58.672680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.623 [2024-11-28 18:11:58.672701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.623 [2024-11-28 18:11:58.674370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.561 [2024-11-28 18:11:59.404432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.561 [2024-11-28 18:11:59.420668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.561 malloc0 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:47.561 { 00:09:47.561 "params": { 00:09:47.561 "name": "Nvme$subsystem", 00:09:47.561 "trtype": "$TEST_TRANSPORT", 00:09:47.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.561 "adrfam": "ipv4", 00:09:47.561 "trsvcid": "$NVMF_PORT", 00:09:47.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.561 "hdgst": ${hdgst:-false}, 00:09:47.561 "ddgst": ${ddgst:-false} 00:09:47.561 }, 00:09:47.561 "method": "bdev_nvme_attach_controller" 00:09:47.561 } 00:09:47.561 EOF 00:09:47.561 )") 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:47.561 18:11:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.561 "params": { 00:09:47.561 "name": "Nvme1", 00:09:47.561 "trtype": "tcp", 00:09:47.561 "traddr": "10.0.0.2", 00:09:47.561 "adrfam": "ipv4", 00:09:47.561 "trsvcid": "4420", 00:09:47.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.561 "hdgst": false, 00:09:47.561 "ddgst": false 00:09:47.561 }, 00:09:47.561 "method": "bdev_nvme_attach_controller" 00:09:47.561 }' 00:09:47.561 [2024-11-28 18:11:59.581776] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:47.561 [2024-11-28 18:11:59.581929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382669 ] 00:09:47.820 [2024-11-28 18:11:59.739309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.820 [2024-11-28 18:11:59.875888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.388 Running I/O for 10 seconds... 00:09:50.276 4083.00 IOPS, 31.90 MiB/s [2024-11-28T17:12:03.753Z] 4113.50 IOPS, 32.14 MiB/s [2024-11-28T17:12:04.690Z] 4144.67 IOPS, 32.38 MiB/s [2024-11-28T17:12:05.625Z] 4157.75 IOPS, 32.48 MiB/s [2024-11-28T17:12:06.561Z] 4161.20 IOPS, 32.51 MiB/s [2024-11-28T17:12:07.499Z] 4165.17 IOPS, 32.54 MiB/s [2024-11-28T17:12:08.434Z] 4167.00 IOPS, 32.55 MiB/s [2024-11-28T17:12:09.862Z] 4175.38 IOPS, 32.62 MiB/s [2024-11-28T17:12:10.429Z] 4178.89 IOPS, 32.65 MiB/s [2024-11-28T17:12:10.429Z] 4181.70 IOPS, 32.67 MiB/s 00:09:58.330 Latency(us) 00:09:58.330 [2024-11-28T17:12:10.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.330 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:58.330 Verification LBA range: start 0x0 length 0x1000 00:09:58.330 Nvme1n1 : 10.02 4184.42 32.69 0.00 0.00 30505.52 4708.88 41554.68 00:09:58.330 [2024-11-28T17:12:10.430Z] =================================================================================================================== 00:09:58.330 [2024-11-28T17:12:10.430Z] Total : 4184.42 32.69 0.00 0.00 30505.52 4708.88 41554.68 00:09:59.266 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2384618 00:09:59.266 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:59.266 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.266 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:59.266 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:59.266 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:59.266 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:59.266 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:59.267 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:59.267 { 00:09:59.267 "params": { 00:09:59.267 "name": 
"Nvme$subsystem", 00:09:59.267 "trtype": "$TEST_TRANSPORT", 00:09:59.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:59.267 "adrfam": "ipv4", 00:09:59.267 "trsvcid": "$NVMF_PORT", 00:09:59.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:59.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:59.267 "hdgst": ${hdgst:-false}, 00:09:59.267 "ddgst": ${ddgst:-false} 00:09:59.267 }, 00:09:59.267 "method": "bdev_nvme_attach_controller" 00:09:59.267 } 00:09:59.267 EOF 00:09:59.267 )") 00:09:59.267 [2024-11-28 18:12:11.310027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.267 [2024-11-28 18:12:11.310102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.267 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:59.267 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:59.267 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:59.267 18:12:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:59.267 "params": { 00:09:59.267 "name": "Nvme1", 00:09:59.267 "trtype": "tcp", 00:09:59.267 "traddr": "10.0.0.2", 00:09:59.267 "adrfam": "ipv4", 00:09:59.267 "trsvcid": "4420", 00:09:59.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:59.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:59.267 "hdgst": false, 00:09:59.267 "ddgst": false 00:09:59.267 }, 00:09:59.267 "method": "bdev_nvme_attach_controller" 00:09:59.267 }' 00:09:59.267 [2024-11-28 18:12:11.317940] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.267 [2024-11-28 18:12:11.317976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.267 [2024-11-28 18:12:11.325927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.267 [2024-11-28 18:12:11.325961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.267 [2024-11-28 18:12:11.333946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.267 [2024-11-28 18:12:11.333975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.267 [2024-11-28 18:12:11.341998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.267 [2024-11-28 18:12:11.342032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.267 [2024-11-28 18:12:11.349996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.267 [2024-11-28 18:12:11.350032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.267 [2024-11-28 18:12:11.358009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.267 [2024-11-28 18:12:11.358036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.267 [2024-11-28 18:12:11.366035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.267 [2024-11-28 18:12:11.366066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.374044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.525 [2024-11-28 18:12:11.374091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.382108] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.525 [2024-11-28 18:12:11.382139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.390105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.525 [2024-11-28 18:12:11.390135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.393272] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:59.525 [2024-11-28 18:12:11.393391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384618 ] 00:09:59.525 [2024-11-28 18:12:11.398149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.525 [2024-11-28 18:12:11.398178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.406167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.525 [2024-11-28 18:12:11.406197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.414189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.525 [2024-11-28 18:12:11.414218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.422267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.525 [2024-11-28 18:12:11.422298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.430259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.525 [2024-11-28 18:12:11.430290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.438270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.525 [2024-11-28 18:12:11.438298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.446276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.525 [2024-11-28 18:12:11.446304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.454291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.525 [2024-11-28 18:12:11.454319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.525 [2024-11-28 18:12:11.462309] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.462338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.470347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.470390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.478362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.478405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 
[2024-11-28 18:12:11.486397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.486424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.494418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.494451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.502432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.502465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.510474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.510507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.518489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.518522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.526521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.526554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.534573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.534606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.542539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.542572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.545213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.526 [2024-11-28 18:12:11.550597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.550630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.558617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.558652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.566684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.566737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.574666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.574700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.582664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.582697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.590717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.590750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.598732] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.598765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.606732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.606766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.614775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.614809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.526 [2024-11-28 18:12:11.622815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.526 [2024-11-28 18:12:11.622853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.630838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.630876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.638846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.638882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.646839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.646870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.654889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.654923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.662921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.662955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.670919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.670956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.678996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.679030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.684702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.784 [2024-11-28 18:12:11.686979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.687013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.694989] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.695024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.703154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.703204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.711168] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.711220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.719079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.719122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.727124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.727179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.784 [2024-11-28 18:12:11.735111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.784 [2024-11-28 18:12:11.735155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.743161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.743189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.751179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.751207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.759183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.759212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.767247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.767276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.775254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.775295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.783329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.783396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.791347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.791412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.799385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.799451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.807406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.807470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.815346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.815392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.823395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.823422] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.831395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.831422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.839412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.839455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.847438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.847466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.855491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.855525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.863488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.863521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.871532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.871565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.785 [2024-11-28 18:12:11.879551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.785 [2024-11-28 18:12:11.879584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.887592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.887629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.895613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.895648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.903609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.903643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.911653] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.911687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.919692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.919725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.927687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.927721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.935798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.935851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.943823] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.943876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.951829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.951883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.959827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.959869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.967802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.967836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.975835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.975868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.983866] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.983900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.991872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.991913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:11.999905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:11.999939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.007925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.007958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.015957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.015990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.023968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.024001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.031982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.032014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.040018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.040051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.048041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.048076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.056049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.056083] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.064187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.064221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.072120] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.072164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.080184] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.080216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.088191] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.088222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.096207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.096239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.104247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.104279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.112244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.112289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.120259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.120289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.128306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.128334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.043 [2024-11-28 18:12:12.136316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.043 [2024-11-28 18:12:12.136345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.144348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.144403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.152390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.152429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.160405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.160454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.168434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.168483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.176475] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.176510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.184485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.184519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.192534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.192568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.200587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.200624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.208564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.208595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.216577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.216606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.224599] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.224627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.232634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.232661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.240686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.240714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.248658] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.248688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.257964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.257999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 [2024-11-28 18:12:12.264736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.264765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.302 Running I/O for 5 seconds... 
00:10:00.302 [2024-11-28 18:12:12.272774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.302 [2024-11-28 18:12:12.272818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.303 [2024-11-28 18:12:12.288517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.303 [2024-11-28 18:12:12.288556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.303 [2024-11-28 18:12:12.302771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.303 [2024-11-28 18:12:12.302807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.303 [2024-11-28 18:12:12.317132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.303 [2024-11-28 18:12:12.317177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.303 [2024-11-28 18:12:12.331789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.303 [2024-11-28 18:12:12.331827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.303 [2024-11-28 18:12:12.346295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.303 [2024-11-28 18:12:12.346332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.303 [2024-11-28 18:12:12.360910] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.303 [2024-11-28 18:12:12.360945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.303 [2024-11-28 18:12:12.375306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.303 [2024-11-28 18:12:12.375355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.303 [2024-11-28 18:12:12.390066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.303 [2024-11-28 18:12:12.390126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.405107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.405156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.419958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.419995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.434738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.434774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.449380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.449431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.464035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.464087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.478765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 
[2024-11-28 18:12:12.478800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.493324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.493360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.507311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.507348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.521368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.521418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.536775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.536816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.550007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.550042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.565880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.565920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.581720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.581760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.597415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.597455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.613247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.613284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.628936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.628975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.644645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.644685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.561 [2024-11-28 18:12:12.661172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.561 [2024-11-28 18:12:12.661219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.676711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.676751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.688802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.688841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.703822] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.703863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.719285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.719320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.734603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.734643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.749708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.749748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.764941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.764981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.780353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.780405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.796335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.796371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.808941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.808982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.825469] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.825510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.838325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.838372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.854009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.854050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.869778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.869818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.883269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.883305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.898933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.898973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.821 [2024-11-28 18:12:12.914626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.821 [2024-11-28 18:12:12.914666] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.080 [2024-11-28 18:12:12.928861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:12.928903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:12.943980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:12.944021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:12.959907] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:12.959948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:12.972819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:12.972859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:12.988062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:12.988112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.003172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.003224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.018591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.018631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.034264] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.034301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.047715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.047755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.062649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.062689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.077818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.077858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.092200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.092251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.107424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.107464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.123149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.123186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.138519] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.138559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.154058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.154108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.081 [2024-11-28 18:12:13.167310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.081 [2024-11-28 18:12:13.167360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.340 [2024-11-28 18:12:13.183283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.340 [2024-11-28 18:12:13.183320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.340 [2024-11-28 18:12:13.197177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.340 [2024-11-28 18:12:13.197214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.340 [2024-11-28 18:12:13.212504] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.340 [2024-11-28 18:12:13.212544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.228044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.228083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.243297] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.243333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.258607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.258647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.273826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.273865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 8245.00 IOPS, 64.41 MiB/s [2024-11-28T17:12:13.441Z] [2024-11-28 18:12:13.288894] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.288933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.305011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.305051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.320760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.320800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.336077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.336153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.351532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:01.341 [2024-11-28 18:12:13.351571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.366998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.367037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.382798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.382839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.398839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.398878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.413867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.413906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.341 [2024-11-28 18:12:13.429996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.341 [2024-11-28 18:12:13.430035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.446630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.446690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.463513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.463554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.479475] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.479515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.494914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.494954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.510041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.510080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.524765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.524806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.539823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.539859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.554917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.554952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.569721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.569756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.584492] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.584526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.598834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.598869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.613638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.613674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.628325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.628362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.643031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.643066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.657744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.657779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.672029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.672064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.600 [2024-11-28 18:12:13.687017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.600 [2024-11-28 18:12:13.687052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.859 [2024-11-28 18:12:13.701837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.859 [2024-11-28 18:12:13.701874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.859 [2024-11-28 18:12:13.716037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.859 [2024-11-28 18:12:13.716073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.859 [2024-11-28 18:12:13.731001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.859 [2024-11-28 18:12:13.731045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.859 [2024-11-28 18:12:13.745580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.859 [2024-11-28 18:12:13.745616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.859 [2024-11-28 18:12:13.760090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.859 [2024-11-28 18:12:13.760136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.859 [2024-11-28 18:12:13.774324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.859 [2024-11-28 18:12:13.774360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.859 [2024-11-28 18:12:13.788369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.859 [2024-11-28 18:12:13.788421] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.859 [2024-11-28 18:12:13.803013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.859 [2024-11-28 18:12:13.803048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.859 [2024-11-28 18:12:13.817533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.859 [2024-11-28 18:12:13.817568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.859 [2024-11-28 18:12:13.831772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.859 [2024-11-28 18:12:13.831807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.860 [2024-11-28 18:12:13.847352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.860 [2024-11-28 18:12:13.847408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.860 [2024-11-28 18:12:13.862522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.860 [2024-11-28 18:12:13.862558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.860 [2024-11-28 18:12:13.878036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.860 [2024-11-28 18:12:13.878087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.860 [2024-11-28 18:12:13.892241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.860 [2024-11-28 18:12:13.892278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.860 [2024-11-28 18:12:13.907288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.860 [2024-11-28 18:12:13.907340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.860 [2024-11-28 18:12:13.923018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.860 [2024-11-28 18:12:13.923058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.860 [2024-11-28 18:12:13.938165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.860 [2024-11-28 18:12:13.938208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.860 [2024-11-28 18:12:13.954295] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.860 [2024-11-28 18:12:13.954331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:13.970270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:13.970324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:13.982851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:13.982892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:13.997853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:13.997904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.013683] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.013733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.026921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.026961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.042222] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.042258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.057723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.057763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.071318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.071358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.086821] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.086861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.099704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.099743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.114441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.114476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.129116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.129168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.144654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.144694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.157696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.157735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.172446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.172485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.187849] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.187889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.204246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.204282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.128 [2024-11-28 18:12:14.220327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.128 [2024-11-28 18:12:14.220364] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.235708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.235749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.249858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.249898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.264613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.264652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.279146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.279181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 8311.00 IOPS, 64.93 MiB/s [2024-11-28T17:12:14.489Z] [2024-11-28 18:12:14.294344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.294406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.309973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.310012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.322922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.322962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.337770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.337810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.352941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.352980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.368248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.368284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.383215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.383264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.398704] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.398743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.413751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.413791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.429057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.429106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 
18:12:14.445002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.445042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.461337] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.461399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.389 [2024-11-28 18:12:14.476839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.389 [2024-11-28 18:12:14.476879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.490557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.490598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.506163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.506199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.519603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.519643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.534909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.534948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.550720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.550760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.566146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.566182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.581830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.581870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.596824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.596865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.612141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.612194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.624578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.624618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.639229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.639265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.655019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.655059] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.668080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.668145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.682873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.682912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.698308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.698344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.711163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.711199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.726274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.726310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.650 [2024-11-28 18:12:14.742082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.650 [2024-11-28 18:12:14.742145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.755033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.755074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.769777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.769817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.784398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.784454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.799722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.799761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.814787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.814826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.829953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.829993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.844237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.844272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.859777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.859817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.874657] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.874697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.890478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.890517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.906178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.906214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.921328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.921393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.909 [2024-11-28 18:12:14.935855] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.909 [2024-11-28 18:12:14.935894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.910 [2024-11-28 18:12:14.951084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.910 [2024-11-28 18:12:14.951149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.910 [2024-11-28 18:12:14.966346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.910 [2024-11-28 18:12:14.966382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.910 [2024-11-28 18:12:14.981680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.910 [2024-11-28 18:12:14.981719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.910 [2024-11-28 18:12:14.995055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.910 [2024-11-28 18:12:14.995106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.010344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.010397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.025624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.025664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.040781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.040820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.056567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.056607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.071262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.071298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.087117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.087171] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.102959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.103000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.118652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.118695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.131884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.131934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.146996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.147037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.162705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.162746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.179070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.179146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.194999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.195039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.210195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.210231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.225878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.225919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.241711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.241751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.171 [2024-11-28 18:12:15.257955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.171 [2024-11-28 18:12:15.258007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.272292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.272328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 8286.67 IOPS, 64.74 MiB/s [2024-11-28T17:12:15.533Z] [2024-11-28 18:12:15.287425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.287478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.302768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.302809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 
18:12:15.316252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.316288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.331555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.331594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.346421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.346461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.362073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.362121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.378003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.378042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.393465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.393507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.408816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.408857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.424212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.424257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.439620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.439662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.455570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.455610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.470698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.470739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.486535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.486576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.502433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.502474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.433 [2024-11-28 18:12:15.518716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.433 [2024-11-28 18:12:15.518757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.693 [2024-11-28 18:12:15.533527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.533568] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.548999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.549040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.564299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.564336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.579464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.579506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.594740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.594780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.609990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.610030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.625474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.625515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.637971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.638012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.652497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.652539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.667251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.667288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.682167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.682204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.697771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.697811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.713544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.713596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.728598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.728639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.743916] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.743956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.759589] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.759630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.774644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.774685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.694 [2024-11-28 18:12:15.790080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.694 [2024-11-28 18:12:15.790147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.955 [2024-11-28 18:12:15.805205] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.955 [2024-11-28 18:12:15.805244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.955 [2024-11-28 18:12:15.820384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.955 [2024-11-28 18:12:15.820424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.955 [2024-11-28 18:12:15.835034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.955 [2024-11-28 18:12:15.835074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.955 [2024-11-28 18:12:15.850842] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.955 [2024-11-28 18:12:15.850882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.955 [2024-11-28 18:12:15.863909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.955 [2024-11-28 18:12:15.863949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.955 [2024-11-28 18:12:15.878419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.955 [2024-11-28 18:12:15.878460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.955 [2024-11-28 18:12:15.894041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.955 [2024-11-28 18:12:15.894082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.955 [2024-11-28 18:12:15.907530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.955 [2024-11-28 18:12:15.907571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.955 [2024-11-28 18:12:15.922835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.955 [2024-11-28 18:12:15.922875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.955 [2024-11-28 18:12:15.937782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.955 [2024-11-28 18:12:15.937822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.955 [2024-11-28 18:12:15.952936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.955 [2024-11-28 18:12:15.952977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.956 [2024-11-28 18:12:15.968362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.956 [2024-11-28 18:12:15.968418] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.956 [2024-11-28 18:12:15.983873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.956 [2024-11-28 18:12:15.983914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.956 [2024-11-28 18:12:15.999140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.956 [2024-11-28 18:12:15.999177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.956 [2024-11-28 18:12:16.012398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.956 [2024-11-28 18:12:16.012435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.956 [2024-11-28 18:12:16.027790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.956 [2024-11-28 18:12:16.027830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.956 [2024-11-28 18:12:16.043016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.956 [2024-11-28 18:12:16.043056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.215 [2024-11-28 18:12:16.058716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.215 [2024-11-28 18:12:16.058760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.215 [2024-11-28 18:12:16.072155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.215 [2024-11-28 18:12:16.072193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.215 [2024-11-28 18:12:16.087105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.215 [2024-11-28 18:12:16.087160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.215 [2024-11-28 18:12:16.102424] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.215 [2024-11-28 18:12:16.102465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.215 [2024-11-28 18:12:16.117674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.215 [2024-11-28 18:12:16.117715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.215 [2024-11-28 18:12:16.132738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.215 [2024-11-28 18:12:16.132779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.215 [2024-11-28 18:12:16.147654] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.215 [2024-11-28 18:12:16.147695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.215 [2024-11-28 18:12:16.163470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.215 [2024-11-28 18:12:16.163511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.215 [2024-11-28 18:12:16.178533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.215 [2024-11-28 18:12:16.178573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.215 [2024-11-28 18:12:16.193649] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.215 [2024-11-28 18:12:16.193690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.215 [2024-11-28 18:12:16.209253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.216 [2024-11-28 18:12:16.209291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.216 [2024-11-28 18:12:16.224863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.216 [2024-11-28 18:12:16.224917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.216 [2024-11-28 18:12:16.240402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.216 [2024-11-28 18:12:16.240444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.216 [2024-11-28 18:12:16.256284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.216 [2024-11-28 18:12:16.256323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.216 [2024-11-28 18:12:16.271465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.216 [2024-11-28 18:12:16.271518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.216 8282.75 IOPS, 64.71 MiB/s [2024-11-28T17:12:16.316Z] [2024-11-28 18:12:16.286471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.216 [2024-11-28 18:12:16.286523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.216 [2024-11-28 18:12:16.301384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.216 [2024-11-28 18:12:16.301421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.475 [2024-11-28 18:12:16.316460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.475 [2024-11-28 18:12:16.316499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.475 [2024-11-28 18:12:16.331965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.475 [2024-11-28 18:12:16.332007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.475 [2024-11-28 18:12:16.348142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.475 [2024-11-28 18:12:16.348178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.475 [2024-11-28 18:12:16.363532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.475 [2024-11-28 18:12:16.363572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.475 [2024-11-28 18:12:16.378752] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.475 [2024-11-28 18:12:16.378792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.475 [2024-11-28 18:12:16.394393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.394434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.407733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:04.476 [2024-11-28 18:12:16.407774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.422616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.422656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.437999] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.438040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.451171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.451207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.466446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.466486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.481548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.481589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.496335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.496372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.511262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.511299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.526686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.526727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.542284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.542321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.557700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.557740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.476 [2024-11-28 18:12:16.573488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.476 [2024-11-28 18:12:16.573529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.588702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.588743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.603735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.603776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.618831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.618871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.634545] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.634586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.650232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.650269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.665576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.665617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.681344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.681396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.697148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.697184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.713242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.713278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.728890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.728930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.743851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.743892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.759446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.759487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.775082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.775132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.790631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.790672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.806031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.806071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.736 [2024-11-28 18:12:16.821484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.736 [2024-11-28 18:12:16.821524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.997 [2024-11-28 18:12:16.836972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:16.837013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:16.852370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:16.852432] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:16.867852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:16.867892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:16.883513] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:16.883554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:16.898674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:16.898714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:16.914845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:16.914885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:16.930661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:16.930702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:16.946577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:16.946618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:16.962372] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:16.962408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:16.977770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:16.977810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:16.993420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:16.993474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:17.008335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:17.008387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:17.023130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:17.023185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:17.039122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:17.039175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:17.055448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:17.055489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:17.070486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:17.070527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.998 [2024-11-28 18:12:17.085100] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.998 [2024-11-28 18:12:17.085154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.100727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.100768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.115770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.115811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.130619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.130660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.145743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.145793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.161112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.161181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.176921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.176961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.192535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.192576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.207476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.207531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.222232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.222269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.237552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.237593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.252820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.252861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.267957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.267998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.280272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.280308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 8273.60 IOPS, 64.64 MiB/s [2024-11-28T17:12:17.358Z] [2024-11-28 18:12:17.295267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:05.258 [2024-11-28 18:12:17.295303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.301254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.301287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:05.258 
00:10:05.258 Latency(us)
00:10:05.258 [2024-11-28T17:12:17.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:05.258 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:05.258 Nvme1n1 : 5.01 8276.16 64.66 0.00 0.00 15438.36 6407.96 25243.50
00:10:05.258 [2024-11-28T17:12:17.358Z] ===================================================================================================================
00:10:05.258 [2024-11-28T17:12:17.358Z] Total : 8276.16 64.66 0.00 0.00 15438.36 6407.96 25243.50
00:10:05.258 [2024-11-28 18:12:17.309334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.309383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.317280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.317312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.325342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.325396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.333328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.333361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.341419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.341455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.349459] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.349506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.258 [2024-11-28 18:12:17.357506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.258 [2024-11-28 18:12:17.357565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.365582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.365654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.373479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.373524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.381498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.381532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.389543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.389577]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.397524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.397556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.405573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.405607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.413615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.413649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.421644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.421679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.429651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.429685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.437669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.437703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.445678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.445711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.453802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.453857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.461831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.461896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.469881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.469942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.477797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.477832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.485795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.485829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.493865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.493899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.501857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.501891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.509901] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.509935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.517923] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.517957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.525908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.525941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.533953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.533986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.541972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.542005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.549979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.550013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.558019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.558052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.566060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.519 [2024-11-28 18:12:17.566102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.519 [2024-11-28 18:12:17.574017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.520 [2024-11-28 18:12:17.574045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.520 [2024-11-28 18:12:17.582088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.520 [2024-11-28 18:12:17.582155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.520 [2024-11-28 18:12:17.594167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.520 [2024-11-28 18:12:17.594199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.520 [2024-11-28 18:12:17.602182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.520 [2024-11-28 18:12:17.602212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.520 [2024-11-28 18:12:17.610188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.520 [2024-11-28 18:12:17.610217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.520 [2024-11-28 18:12:17.618215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.520 [2024-11-28 18:12:17.618255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.626236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.626266] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.634251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.634282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.642364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.642424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.650370] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.650435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.658278] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.658308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.666310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.666339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.674339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.674369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.682355] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.682399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.690401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.690430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.698545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.698602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.706542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.706601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.714609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.714670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.722565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.722617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.730529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.730556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.738537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.738565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.746538] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.746565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.754573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.754600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.762578] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.762616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.770585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.770614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.778629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.778656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.786648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.786675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.794669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.794696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.802700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.802729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.810731] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.810760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.818742] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.818769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.826762] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.826790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.834773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.834800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.781 [2024-11-28 18:12:17.842812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.781 [2024-11-28 18:12:17.842839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.782 [2024-11-28 18:12:17.850822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.782 [2024-11-28 18:12:17.850851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.782 [2024-11-28 18:12:17.858879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.782 [2024-11-28 18:12:17.858909] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.782 [2024-11-28 18:12:17.867018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.782 [2024-11-28 18:12:17.867076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.782 [2024-11-28 18:12:17.875004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.782 [2024-11-28 18:12:17.875073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.882929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.882957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.890957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.890984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.898947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.898974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.907006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.907034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.914997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.915024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.923059] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.923102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.931083] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.931141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.939113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.939159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.947154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.947189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.955167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.955196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.963175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.963203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.971211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.971240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.979210] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.979239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.987365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.987425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:17.995273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:17.995302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:18.003289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:18.003316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:18.011307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:18.011335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:18.019326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:18.019353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.043 [2024-11-28 18:12:18.027334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.043 [2024-11-28 18:12:18.027362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.035387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.035414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.043388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.043416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.051541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.051575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.059472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.059505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.067474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.067508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.075492] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.075523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.083657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.083722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.091545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.091579] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.099606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.099648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.107600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.107633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.115631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.115664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.123642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.123674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.131651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.131683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.044 [2024-11-28 18:12:18.139692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.044 [2024-11-28 18:12:18.139725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 [2024-11-28 18:12:18.147726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.305 [2024-11-28 18:12:18.147761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 [2024-11-28 18:12:18.155723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.305 [2024-11-28 18:12:18.155756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 [2024-11-28 18:12:18.163760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.305 [2024-11-28 18:12:18.163792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 [2024-11-28 18:12:18.171779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.305 [2024-11-28 18:12:18.171812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 [2024-11-28 18:12:18.179833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.305 [2024-11-28 18:12:18.179873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 [2024-11-28 18:12:18.187832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.305 [2024-11-28 18:12:18.187864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 [2024-11-28 18:12:18.195864] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.305 [2024-11-28 18:12:18.195896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 [2024-11-28 18:12:18.203888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.305 [2024-11-28 18:12:18.203921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 [2024-11-28 18:12:18.211904] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.305 [2024-11-28 18:12:18.211937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 [2024-11-28 18:12:18.219906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.305 [2024-11-28 18:12:18.219938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 [2024-11-28 18:12:18.227952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.305 [2024-11-28 18:12:18.227985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2384618) - No such process 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2384618 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.305 delay0 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.305 18:12:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:06.566 [2024-11-28 18:12:18.453257] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:13.143 Initializing NVMe Controllers 00:10:13.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:13.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:13.143 Initialization complete. Launching workers. 
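[Editor's note] The wall of paired subsystem.c:2126 / nvmf_rpc.c:1520 errors above is the expected outcome of this phase of zcopy.sh: while I/O is still running, the script keeps asking the target to attach another namespace with NSID 1, and every attempt is rejected because that NSID is already attached to nqn.2016-06.io.spdk:cnode1. Once the background job is gone (hence the "kill: (2384618) - No such process" message), the namespace is swapped for a delay bdev and the abort example is pointed at it; the abort run's completion summary follows immediately below. A rough sketch of the sequence, reusing the arguments visible in the trace; this is an illustration, not the script verbatim, and $rootdir / $io_pid are stand-ins for the SPDK checkout and the background I/O job's pid:

    # Sketch of the namespace-churn and abort phase of zcopy.sh (illustrative only).
    rpc=$rootdir/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Negative path: NSID 1 is already attached, so every add attempt fails with
    # "Requested NSID 1 already in use" / "Unable to add namespace", as logged above.
    while kill -0 "$io_pid" 2>/dev/null; do
        "$rpc" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1 || true
    done

    # Swap the namespace for a delay bdev (1000000 us = 1 s added latency on reads
    # and writes) and drive it with the abort example over TCP.
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
    "$rpc" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns "$nqn" delay0 -n 1
    "$rootdir/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'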
00:10:13.143 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 64 00:10:13.143 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 351, failed to submit 33 00:10:13.143 success 164, unsuccessful 187, failed 0 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.143 rmmod nvme_tcp 00:10:13.143 rmmod nvme_fabrics 00:10:13.143 rmmod nvme_keyring 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2382515 ']' 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2382515 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2382515 ']' 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2382515 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2382515 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2382515' 00:10:13.143 killing process with pid 2382515 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2382515 00:10:13.143 18:12:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2382515 00:10:14.082 18:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:14.082 18:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:14.082 18:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:14.082 18:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:14.082 18:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:14.082 18:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:14.082 18:12:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:14.082 18:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:14.082 18:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:14.082 18:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.082 18:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.082 18:12:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.989 18:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.989 00:10:15.989 real 0m32.077s 00:10:15.989 user 0m48.087s 00:10:15.989 sys 0m8.153s 00:10:15.989 18:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.989 18:12:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.989 ************************************ 00:10:15.989 END TEST nvmf_zcopy 00:10:15.989 ************************************ 00:10:15.989 18:12:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:15.989 18:12:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.989 18:12:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.990 18:12:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.990 ************************************ 00:10:15.990 START TEST nvmf_nmic 00:10:15.990 ************************************ 00:10:15.990 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:16.249 * Looking for test storage... 
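[Editor's note] With the zcopy case finished, nvmftestfini (traced above) returns the machine to a clean state before nvmf_nmic starts: the host-side NVMe/TCP modules are unloaded, the nvmf_tgt process (pid 2382515 here) is killed and reaped, the SPDK_NVMF-tagged iptables rules are dropped, the test network namespace is removed and the leftover interface address is flushed. A condensed, illustrative sketch of those steps; the real helpers live in test/nvmf/common.sh, interface and namespace names are taken from the trace, and $nvmfpid stands for the target's pid:

    # Condensed sketch of the nvmftestfini cleanup (illustrative only).
    sync
    modprobe -v -r nvme-tcp        # -v shows the cascading rmmod of nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics    # second pass is a no-op once the stack is already gone

    kill "$nvmfpid" && wait "$nvmfpid"      # stop the nvmf_tgt reactor started earlier

    # Keep all firewall rules except the SPDK_NVMF-tagged ones added for the test.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # clear the second test port

The nvmf_nmic run that has just started then rebuilds this environment from scratch; its test-storage lookup continues directly below.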
00:10:16.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:16.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.249 --rc genhtml_branch_coverage=1 00:10:16.249 --rc genhtml_function_coverage=1 00:10:16.249 --rc genhtml_legend=1 00:10:16.249 --rc geninfo_all_blocks=1 00:10:16.249 --rc geninfo_unexecuted_blocks=1 00:10:16.249 00:10:16.249 ' 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:16.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.249 --rc genhtml_branch_coverage=1 00:10:16.249 --rc genhtml_function_coverage=1 00:10:16.249 --rc genhtml_legend=1 00:10:16.249 --rc geninfo_all_blocks=1 00:10:16.249 --rc geninfo_unexecuted_blocks=1 00:10:16.249 00:10:16.249 ' 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:16.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.249 --rc genhtml_branch_coverage=1 00:10:16.249 --rc genhtml_function_coverage=1 00:10:16.249 --rc genhtml_legend=1 00:10:16.249 --rc geninfo_all_blocks=1 00:10:16.249 --rc geninfo_unexecuted_blocks=1 00:10:16.249 00:10:16.249 ' 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:16.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.249 --rc genhtml_branch_coverage=1 00:10:16.249 --rc genhtml_function_coverage=1 00:10:16.249 --rc genhtml_legend=1 00:10:16.249 --rc geninfo_all_blocks=1 00:10:16.249 --rc geninfo_unexecuted_blocks=1 00:10:16.249 00:10:16.249 ' 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
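[Editor's note] Before nmic.sh does any real work, autotest_common.sh probes the installed lcov and decides whether it is older than 2.x: "lt 1.15 2" expands to cmp_versions 1.15 '<' 2, which splits both version strings on '.' and '-' and walks the fields numerically, exactly as traced above. A minimal re-sketch of that comparison, simplified; the real helper lives in scripts/common.sh and handles more operators:

    # Minimal, simplified sketch of a field-wise version comparison ("is A < B?").
    version_lt() {
        local IFS=.-
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0     # 1.15 vs 2: the first field decides, 1 < 2
            ((x > y)) && return 1
        done
        return 1                      # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "installed lcov is older than 2"

The nvmf/common.sh sourcing that continues below then fills in the defaults the rest of nmic.sh relies on: target ports 4420-4422, the 192.168.100 address prefix, and a host NQN generated with nvme gen-hostnqn.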
00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.249 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:16.250 
18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:16.250 18:12:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:18.791 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:18.791 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.791 18:12:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:18.791 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:18.791 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.791 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:10:18.792 00:10:18.792 --- 10.0.0.2 ping statistics --- 00:10:18.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.792 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:10:18.792 00:10:18.792 --- 10.0.0.1 ping statistics --- 00:10:18.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.792 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2388282 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2388282 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2388282 ']' 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.792 18:12:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.792 [2024-11-28 18:12:30.567729] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:18.792 [2024-11-28 18:12:30.567881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.792 [2024-11-28 18:12:30.725072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.792 [2024-11-28 18:12:30.872002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.792 [2024-11-28 18:12:30.872086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.792 [2024-11-28 18:12:30.872128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.792 [2024-11-28 18:12:30.872159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.792 [2024-11-28 18:12:30.872179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.792 [2024-11-28 18:12:30.875058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.792 [2024-11-28 18:12:30.875128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.792 [2024-11-28 18:12:30.875165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.792 [2024-11-28 18:12:30.875172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.730 [2024-11-28 18:12:31.542371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.730 Malloc0 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.730 [2024-11-28 18:12:31.661800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:19.730 test case1: single bdev can't be used in multiple subsystems 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.730 [2024-11-28 18:12:31.685554] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:19.730 [2024-11-28 18:12:31.685609] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:19.730 [2024-11-28 18:12:31.685638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.730 request: 00:10:19.730 { 00:10:19.730 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:19.730 "namespace": { 00:10:19.730 "bdev_name": "Malloc0", 00:10:19.730 "no_auto_visible": false, 
00:10:19.730 "hide_metadata": false 00:10:19.730 }, 00:10:19.730 "method": "nvmf_subsystem_add_ns", 00:10:19.730 "req_id": 1 00:10:19.730 } 00:10:19.730 Got JSON-RPC error response 00:10:19.730 response: 00:10:19.730 { 00:10:19.730 "code": -32602, 00:10:19.730 "message": "Invalid parameters" 00:10:19.730 } 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:19.730 Adding namespace failed - expected result. 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:19.730 test case2: host connect to nvmf target in multiple paths 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.730 [2024-11-28 18:12:31.697708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.730 18:12:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:20.298 18:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:20.869 18:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.870 18:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:20.870 18:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.870 18:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:20.870 18:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:23.412 18:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:23.412 18:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:23.412 18:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:23.412 18:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:23.412 18:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.412 18:12:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:23.412 18:12:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:23.412 [global] 00:10:23.412 thread=1 00:10:23.412 invalidate=1 00:10:23.412 rw=write 00:10:23.412 time_based=1 00:10:23.412 runtime=1 00:10:23.412 ioengine=libaio 00:10:23.412 direct=1 00:10:23.412 bs=4096 00:10:23.412 iodepth=1 00:10:23.412 norandommap=0 00:10:23.412 numjobs=1 00:10:23.412 00:10:23.412 verify_dump=1 00:10:23.412 verify_backlog=512 00:10:23.412 verify_state_save=0 00:10:23.412 do_verify=1 00:10:23.412 verify=crc32c-intel 00:10:23.412 [job0] 00:10:23.412 filename=/dev/nvme0n1 00:10:23.412 Could not set queue depth (nvme0n1) 00:10:23.412 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:23.412 fio-3.35 00:10:23.412 Starting 1 thread 00:10:24.351 00:10:24.351 job0: (groupid=0, jobs=1): err= 0: pid=2388933: Thu Nov 28 18:12:36 2024 00:10:24.351 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:24.351 slat (nsec): min=5732, max=52757, avg=14750.06, stdev=5475.10 00:10:24.351 clat (usec): min=232, max=2171, avg=292.44, stdev=61.51 00:10:24.351 lat (usec): min=239, max=2179, avg=307.19, stdev=62.75 00:10:24.351 clat percentiles (usec): 00:10:24.351 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 269], 00:10:24.351 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:10:24.351 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 334], 95.00th=[ 351], 00:10:24.351 | 99.00th=[ 379], 99.50th=[ 424], 99.90th=[ 1029], 99.95th=[ 2180], 00:10:24.351 | 99.99th=[ 2180] 00:10:24.351 write: IOPS=1994, BW=7976KiB/s (8167kB/s)(7984KiB/1001msec); 0 zone resets 00:10:24.351 slat (usec): min=9, max=28327, avg=31.54, stdev=633.70 00:10:24.351 clat (usec): min=167, max=413, avg=225.27, stdev=31.16 00:10:24.351 lat (usec): min=176, max=28657, avg=256.81, stdev=636.97 00:10:24.351 clat percentiles (usec): 00:10:24.351 | 1.00th=[ 174], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:10:24.351 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 231], 00:10:24.351 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:10:24.351 | 99.00th=[ 330], 99.50th=[ 359], 99.90th=[ 388], 99.95th=[ 412], 00:10:24.351 | 99.99th=[ 412] 00:10:24.351 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:24.351 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:24.351 lat (usec) : 250=46.83%, 500=53.06%, 750=0.03%, 1000=0.03% 00:10:24.351 lat (msec) : 2=0.03%, 4=0.03% 00:10:24.351 cpu : usr=4.80%, sys=7.20%, ctx=3534, majf=0, minf=1 00:10:24.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:24.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.351 issued rwts: total=1536,1996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:24.351 00:10:24.351 Run status group 0 (all jobs): 00:10:24.351 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:10:24.351 WRITE: bw=7976KiB/s (8167kB/s), 7976KiB/s-7976KiB/s (8167kB/s-8167kB/s), io=7984KiB (8176kB), run=1001-1001msec 00:10:24.351 00:10:24.351 Disk stats (read/write): 00:10:24.351 nvme0n1: ios=1563/1536, merge=0/0, ticks=1021/332, in_queue=1353, util=98.70% 
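For reference, the short write/verify pass above can be approximated outside the fio-wrapper script. The sketch below is a hypothetical standalone job file assembled from the parameters printed in the log (libaio, direct I/O, 4 KiB blocks, queue depth 1, 1-second time-based run, crc32c-intel verification against /dev/nvme0n1); the path /tmp/nmic-write.fio is an assumption, not part of the harness.

  # hypothetical standalone fio job mirroring the wrapper parameters shown above
  cat > /tmp/nmic-write.fio <<'EOF'
  [global]
  ioengine=libaio
  direct=1
  rw=write
  bs=4096
  iodepth=1
  numjobs=1
  time_based=1
  runtime=1
  do_verify=1
  verify=crc32c-intel
  verify_backlog=512
  verify_dump=1

  [job0]
  filename=/dev/nvme0n1
  EOF
  fio /tmp/nmic-write.fio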
00:10:24.351 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:24.610 rmmod nvme_tcp 00:10:24.610 rmmod nvme_fabrics 00:10:24.610 rmmod nvme_keyring 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2388282 ']' 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2388282 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2388282 ']' 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2388282 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2388282 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2388282' 00:10:24.610 killing process with pid 2388282 00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2388282 
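The initiator-side cleanup traced here can be summarized as a short manual sequence. This is a minimal sketch assuming the same subsystem NQN and a known nvmf_tgt PID (2388282 in this run); it is not the harness's exact nvmftestfini implementation.

  # hypothetical manual cleanup mirroring the traced teardown
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both TCP paths (ports 4420 and 4421)
  sync
  modprobe -v -r nvme-tcp        # also unloads nvme_fabrics and nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill <nvmf_tgt_pid>            # placeholder; the trace kills pid 2388282 via its killprocess helper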
00:10:24.610 18:12:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2388282 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.992 18:12:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:28.536 00:10:28.536 real 0m12.032s 00:10:28.536 user 0m28.499s 00:10:28.536 sys 0m2.770s 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.536 ************************************ 00:10:28.536 END TEST nvmf_nmic 00:10:28.536 ************************************ 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:28.536 ************************************ 00:10:28.536 START TEST nvmf_fio_target 00:10:28.536 ************************************ 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:28.536 * Looking for test storage... 
00:10:28.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:28.536 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:28.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.537 --rc genhtml_branch_coverage=1 00:10:28.537 --rc genhtml_function_coverage=1 00:10:28.537 --rc genhtml_legend=1 00:10:28.537 --rc geninfo_all_blocks=1 00:10:28.537 --rc geninfo_unexecuted_blocks=1 00:10:28.537 00:10:28.537 ' 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:28.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.537 --rc genhtml_branch_coverage=1 00:10:28.537 --rc genhtml_function_coverage=1 00:10:28.537 --rc genhtml_legend=1 00:10:28.537 --rc geninfo_all_blocks=1 00:10:28.537 --rc geninfo_unexecuted_blocks=1 00:10:28.537 00:10:28.537 ' 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:28.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.537 --rc genhtml_branch_coverage=1 00:10:28.537 --rc genhtml_function_coverage=1 00:10:28.537 --rc genhtml_legend=1 00:10:28.537 --rc geninfo_all_blocks=1 00:10:28.537 --rc geninfo_unexecuted_blocks=1 00:10:28.537 00:10:28.537 ' 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:28.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.537 --rc genhtml_branch_coverage=1 00:10:28.537 --rc genhtml_function_coverage=1 00:10:28.537 --rc genhtml_legend=1 00:10:28.537 --rc geninfo_all_blocks=1 00:10:28.537 --rc geninfo_unexecuted_blocks=1 00:10:28.537 00:10:28.537 ' 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.537 18:12:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.537 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.538 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.538 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.538 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.538 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.538 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.538 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.538 18:12:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.447 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.448 18:12:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:30.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:30.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.448 18:12:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:30.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:30.448 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.448 18:12:42 
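The nvmf_tcp_init sequence recorded next wires the two E810 ports back-to-back by moving one of them into a network namespace, so target and initiator traffic actually crosses the link. A minimal sketch of that topology, using the interface names, addresses, and port from this run:

  # Target side: cvl_0_0 lives in its own namespace with 10.0.0.2/24.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Initiator side: cvl_0_1 stays in the root namespace with 10.0.0.1/24.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Connectivity check in both directions before the target starts.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1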
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:30.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:10:30.448 00:10:30.448 --- 10.0.0.2 ping statistics --- 00:10:30.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.448 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:30.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:10:30.448 00:10:30.448 --- 10.0.0.1 ping statistics --- 00:10:30.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.448 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.448 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2391260 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2391260 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2391260 ']' 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.710 18:12:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.710 [2024-11-28 18:12:42.661549] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:30.710 [2024-11-28 18:12:42.661707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.970 [2024-11-28 18:12:42.826982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.970 [2024-11-28 18:12:42.968000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.970 [2024-11-28 18:12:42.968063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.970 [2024-11-28 18:12:42.968088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.970 [2024-11-28 18:12:42.968133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.970 [2024-11-28 18:12:42.968153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.970 [2024-11-28 18:12:42.970967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.970 [2024-11-28 18:12:42.971032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.970 [2024-11-28 18:12:42.971083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.970 [2024-11-28 18:12:42.971089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.912 18:12:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.912 18:12:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:31.912 18:12:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:31.912 18:12:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:31.912 18:12:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.912 18:12:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.912 18:12:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:31.912 [2024-11-28 18:12:43.981214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.172 18:12:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.431 18:12:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:32.431 18:12:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.690 18:12:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:32.690 18:12:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.951 18:12:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:32.951 18:12:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.576 18:12:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:33.576 18:12:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:33.834 18:12:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.093 18:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:34.093 18:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.351 18:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:34.351 18:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.921 18:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:34.921 18:12:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:34.921 18:12:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:35.487 18:12:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:35.487 18:12:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:35.487 18:12:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:35.487 18:12:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.744 18:12:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:36.308 [2024-11-28 18:12:48.104592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.308 18:12:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:36.566 18:12:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:36.826 18:12:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:37.395 18:12:49 
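Steps fio.sh@19 through fio.sh@46 above provision the target over JSON-RPC and then attach from the initiator. Collapsed into one place, the sequence recorded in the log is roughly as follows (rpc.py is shown with a shortened path relative to the SPDK tree as an assumption; the log uses the full workspace path, and the host NQN/UUID is the one printed above):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                 # Malloc0, Malloc1: plain namespaces
  $rpc bdev_malloc_create 64 512
  $rpc bdev_malloc_create 64 512                 # Malloc2, Malloc3: raid0 members
  $rpc bdev_malloc_create 64 512
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_malloc_create 64 512                 # Malloc4..Malloc6: concat0 members
  $rpc bdev_malloc_create 64 512
  $rpc bdev_malloc_create 64 512
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

  # Initiator: connect over TCP; the four namespaces appear as /dev/nvme0n1..nvme0n4.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
       --hostid=5b23e107-7094-e311-b1cb-001e67a97d55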
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:37.395 18:12:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:37.395 18:12:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:37.395 18:12:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:37.395 18:12:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:37.395 18:12:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:39.302 18:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:39.302 18:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:39.302 18:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:39.560 18:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:39.560 18:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:39.560 18:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:39.560 18:12:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:39.560 [global] 00:10:39.560 thread=1 00:10:39.560 invalidate=1 00:10:39.560 rw=write 00:10:39.560 time_based=1 00:10:39.560 runtime=1 00:10:39.560 ioengine=libaio 00:10:39.560 direct=1 00:10:39.560 bs=4096 00:10:39.560 iodepth=1 00:10:39.560 norandommap=0 00:10:39.560 numjobs=1 00:10:39.560 00:10:39.560 verify_dump=1 00:10:39.560 verify_backlog=512 00:10:39.560 verify_state_save=0 00:10:39.560 do_verify=1 00:10:39.560 verify=crc32c-intel 00:10:39.560 [job0] 00:10:39.560 filename=/dev/nvme0n1 00:10:39.560 [job1] 00:10:39.560 filename=/dev/nvme0n2 00:10:39.560 [job2] 00:10:39.560 filename=/dev/nvme0n3 00:10:39.560 [job3] 00:10:39.560 filename=/dev/nvme0n4 00:10:39.560 Could not set queue depth (nvme0n1) 00:10:39.560 Could not set queue depth (nvme0n2) 00:10:39.560 Could not set queue depth (nvme0n3) 00:10:39.560 Could not set queue depth (nvme0n4) 00:10:39.560 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.560 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.560 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.560 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:39.560 fio-3.35 00:10:39.560 Starting 4 threads 00:10:40.941 00:10:40.941 job0: (groupid=0, jobs=1): err= 0: pid=2392482: Thu Nov 28 18:12:52 2024 00:10:40.941 read: IOPS=170, BW=681KiB/s (697kB/s)(684KiB/1005msec) 00:10:40.941 slat (nsec): min=5618, max=35303, avg=9012.08, stdev=7175.95 00:10:40.941 clat (usec): min=236, max=42040, avg=4835.92, stdev=12960.81 00:10:40.941 lat (usec): min=242, max=42058, avg=4844.93, stdev=12965.92 00:10:40.941 clat percentiles (usec): 00:10:40.941 | 1.00th=[ 239], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 
00:10:40.941 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 262], 60.00th=[ 269], 00:10:40.941 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[41157], 95.00th=[41157], 00:10:40.941 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:40.941 | 99.99th=[42206] 00:10:40.941 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:40.941 slat (nsec): min=6652, max=87524, avg=18213.59, stdev=10952.07 00:10:40.941 clat (usec): min=186, max=1034, avg=320.04, stdev=93.22 00:10:40.941 lat (usec): min=194, max=1050, avg=338.25, stdev=94.54 00:10:40.941 clat percentiles (usec): 00:10:40.941 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 241], 00:10:40.941 | 30.00th=[ 269], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 330], 00:10:40.941 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 420], 95.00th=[ 457], 00:10:40.941 | 99.00th=[ 603], 99.50th=[ 783], 99.90th=[ 1037], 99.95th=[ 1037], 00:10:40.941 | 99.99th=[ 1037] 00:10:40.941 bw ( KiB/s): min= 4087, max= 4087, per=31.58%, avg=4087.00, stdev= 0.00, samples=1 00:10:40.941 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:40.941 lat (usec) : 250=21.96%, 500=73.65%, 750=1.17%, 1000=0.29% 00:10:40.941 lat (msec) : 2=0.15%, 50=2.78% 00:10:40.941 cpu : usr=0.90%, sys=1.10%, ctx=685, majf=0, minf=1 00:10:40.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.941 issued rwts: total=171,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.941 job1: (groupid=0, jobs=1): err= 0: pid=2392483: Thu Nov 28 18:12:52 2024 00:10:40.941 read: IOPS=22, BW=91.5KiB/s (93.6kB/s)(92.0KiB/1006msec) 00:10:40.941 slat (nsec): min=14758, max=54783, avg=27038.91, stdev=11125.44 00:10:40.941 clat (usec): min=416, max=42052, avg=35977.56, stdev=14083.72 00:10:40.941 lat (usec): min=433, max=42069, avg=36004.60, stdev=14082.94 00:10:40.941 clat percentiles (usec): 00:10:40.941 | 1.00th=[ 416], 5.00th=[ 437], 10.00th=[ 453], 20.00th=[40633], 00:10:40.941 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:40.941 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:40.941 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:40.941 | 99.99th=[42206] 00:10:40.941 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:10:40.941 slat (nsec): min=7343, max=71649, avg=20308.56, stdev=10406.97 00:10:40.941 clat (usec): min=182, max=642, avg=313.20, stdev=78.08 00:10:40.941 lat (usec): min=192, max=659, avg=333.51, stdev=77.21 00:10:40.941 clat percentiles (usec): 00:10:40.941 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 235], 00:10:40.941 | 30.00th=[ 258], 40.00th=[ 277], 50.00th=[ 314], 60.00th=[ 338], 00:10:40.941 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 412], 95.00th=[ 445], 00:10:40.941 | 99.00th=[ 486], 99.50th=[ 529], 99.90th=[ 644], 99.95th=[ 644], 00:10:40.941 | 99.99th=[ 644] 00:10:40.941 bw ( KiB/s): min= 4087, max= 4087, per=31.58%, avg=4087.00, stdev= 0.00, samples=1 00:10:40.941 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:40.941 lat (usec) : 250=25.42%, 500=69.91%, 750=0.93% 00:10:40.941 lat (msec) : 50=3.74% 00:10:40.941 cpu : usr=0.70%, sys=0.80%, ctx=537, majf=0, minf=1 00:10:40.941 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.941 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.941 job2: (groupid=0, jobs=1): err= 0: pid=2392484: Thu Nov 28 18:12:52 2024 00:10:40.941 read: IOPS=179, BW=720KiB/s (737kB/s)(724KiB/1006msec) 00:10:40.941 slat (nsec): min=4395, max=35971, avg=8651.55, stdev=8252.27 00:10:40.941 clat (usec): min=220, max=42024, avg=4538.41, stdev=12540.66 00:10:40.941 lat (usec): min=225, max=42040, avg=4547.06, stdev=12547.31 00:10:40.941 clat percentiles (usec): 00:10:40.941 | 1.00th=[ 225], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 235], 00:10:40.941 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:10:40.941 | 70.00th=[ 258], 80.00th=[ 306], 90.00th=[40633], 95.00th=[41157], 00:10:40.941 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:40.941 | 99.99th=[42206] 00:10:40.941 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:10:40.941 slat (nsec): min=7060, max=64882, avg=21576.54, stdev=10622.59 00:10:40.941 clat (usec): min=175, max=575, avg=320.81, stdev=70.88 00:10:40.941 lat (usec): min=213, max=593, avg=342.39, stdev=67.87 00:10:40.941 clat percentiles (usec): 00:10:40.941 | 1.00th=[ 198], 5.00th=[ 212], 10.00th=[ 233], 20.00th=[ 255], 00:10:40.941 | 30.00th=[ 273], 40.00th=[ 302], 50.00th=[ 322], 60.00th=[ 338], 00:10:40.941 | 70.00th=[ 351], 80.00th=[ 383], 90.00th=[ 408], 95.00th=[ 441], 00:10:40.941 | 99.00th=[ 506], 99.50th=[ 519], 99.90th=[ 578], 99.95th=[ 578], 00:10:40.941 | 99.99th=[ 578] 00:10:40.941 bw ( KiB/s): min= 4087, max= 4087, per=31.58%, avg=4087.00, stdev= 0.00, samples=1 00:10:40.941 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:40.941 lat (usec) : 250=28.72%, 500=67.53%, 750=1.01% 00:10:40.941 lat (msec) : 50=2.74% 00:10:40.941 cpu : usr=1.00%, sys=0.90%, ctx=694, majf=0, minf=1 00:10:40.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.941 issued rwts: total=181,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.941 job3: (groupid=0, jobs=1): err= 0: pid=2392485: Thu Nov 28 18:12:52 2024 00:10:40.941 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:40.941 slat (nsec): min=4730, max=49668, avg=14523.46, stdev=5705.60 00:10:40.941 clat (usec): min=225, max=41742, avg=332.00, stdev=1482.04 00:10:40.941 lat (usec): min=233, max=41757, avg=346.52, stdev=1482.11 00:10:40.941 clat percentiles (usec): 00:10:40.941 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 253], 00:10:40.941 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:10:40.941 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 355], 00:10:40.941 | 99.00th=[ 412], 99.50th=[ 465], 99.90th=[41157], 99.95th=[41681], 00:10:40.941 | 99.99th=[41681] 00:10:40.941 write: IOPS=1717, BW=6869KiB/s (7034kB/s)(6876KiB/1001msec); 0 zone resets 00:10:40.941 slat (nsec): min=6060, max=92089, avg=18213.27, stdev=8667.85 00:10:40.941 clat (usec): min=177, max=589, avg=243.04, stdev=63.82 00:10:40.941 lat 
(usec): min=185, max=617, avg=261.25, stdev=65.84 00:10:40.941 clat percentiles (usec): 00:10:40.941 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:10:40.941 | 30.00th=[ 202], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 229], 00:10:40.941 | 70.00th=[ 249], 80.00th=[ 285], 90.00th=[ 334], 95.00th=[ 388], 00:10:40.941 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 586], 99.95th=[ 586], 00:10:40.941 | 99.99th=[ 586] 00:10:40.941 bw ( KiB/s): min= 8175, max= 8175, per=63.16%, avg=8175.00, stdev= 0.00, samples=1 00:10:40.941 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:40.941 lat (usec) : 250=45.78%, 500=53.98%, 750=0.18% 00:10:40.941 lat (msec) : 50=0.06% 00:10:40.941 cpu : usr=3.40%, sys=7.20%, ctx=3257, majf=0, minf=1 00:10:40.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.941 issued rwts: total=1536,1719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.941 00:10:40.941 Run status group 0 (all jobs): 00:10:40.941 READ: bw=7598KiB/s (7781kB/s), 91.5KiB/s-6138KiB/s (93.6kB/s-6285kB/s), io=7644KiB (7827kB), run=1001-1006msec 00:10:40.941 WRITE: bw=12.6MiB/s (13.3MB/s), 2036KiB/s-6869KiB/s (2085kB/s-7034kB/s), io=12.7MiB (13.3MB), run=1001-1006msec 00:10:40.941 00:10:40.941 Disk stats (read/write): 00:10:40.941 nvme0n1: ios=71/512, merge=0/0, ticks=949/153, in_queue=1102, util=98.40% 00:10:40.941 nvme0n2: ios=45/512, merge=0/0, ticks=1623/145, in_queue=1768, util=98.17% 00:10:40.941 nvme0n3: ios=98/512, merge=0/0, ticks=1632/158, in_queue=1790, util=98.33% 00:10:40.941 nvme0n4: ios=1444/1536, merge=0/0, ticks=1321/358, in_queue=1679, util=98.01% 00:10:40.942 18:12:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:40.942 [global] 00:10:40.942 thread=1 00:10:40.942 invalidate=1 00:10:40.942 rw=randwrite 00:10:40.942 time_based=1 00:10:40.942 runtime=1 00:10:40.942 ioengine=libaio 00:10:40.942 direct=1 00:10:40.942 bs=4096 00:10:40.942 iodepth=1 00:10:40.942 norandommap=0 00:10:40.942 numjobs=1 00:10:40.942 00:10:40.942 verify_dump=1 00:10:40.942 verify_backlog=512 00:10:40.942 verify_state_save=0 00:10:40.942 do_verify=1 00:10:40.942 verify=crc32c-intel 00:10:40.942 [job0] 00:10:40.942 filename=/dev/nvme0n1 00:10:40.942 [job1] 00:10:40.942 filename=/dev/nvme0n2 00:10:40.942 [job2] 00:10:40.942 filename=/dev/nvme0n3 00:10:40.942 [job3] 00:10:40.942 filename=/dev/nvme0n4 00:10:40.942 Could not set queue depth (nvme0n1) 00:10:40.942 Could not set queue depth (nvme0n2) 00:10:40.942 Could not set queue depth (nvme0n3) 00:10:40.942 Could not set queue depth (nvme0n4) 00:10:41.201 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.201 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.201 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.201 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.201 fio-3.35 00:10:41.201 Starting 4 threads 00:10:42.584 00:10:42.584 job0: (groupid=0, jobs=1): err= 0: 
pid=2392720: Thu Nov 28 18:12:54 2024 00:10:42.584 read: IOPS=838, BW=3352KiB/s (3433kB/s)(3456KiB/1031msec) 00:10:42.584 slat (nsec): min=6510, max=43456, avg=12359.16, stdev=5267.15 00:10:42.584 clat (usec): min=229, max=41901, avg=879.88, stdev=4764.45 00:10:42.584 lat (usec): min=239, max=41936, avg=892.24, stdev=4765.01 00:10:42.584 clat percentiles (usec): 00:10:42.584 | 1.00th=[ 251], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:10:42.584 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 322], 00:10:42.584 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 379], 00:10:42.584 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:42.584 | 99.99th=[41681] 00:10:42.584 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:10:42.584 slat (nsec): min=6726, max=59662, avg=14840.45, stdev=7164.23 00:10:42.584 clat (usec): min=179, max=412, avg=231.57, stdev=26.92 00:10:42.584 lat (usec): min=189, max=423, avg=246.41, stdev=30.60 00:10:42.584 clat percentiles (usec): 00:10:42.584 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 208], 00:10:42.584 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 235], 00:10:42.584 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:10:42.584 | 99.00th=[ 302], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 412], 00:10:42.584 | 99.99th=[ 412] 00:10:42.584 bw ( KiB/s): min= 4096, max= 4096, per=22.91%, avg=4096.00, stdev= 0.00, samples=2 00:10:42.584 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:42.584 lat (usec) : 250=41.26%, 500=58.00%, 750=0.05%, 1000=0.05% 00:10:42.584 lat (msec) : 50=0.64% 00:10:42.584 cpu : usr=1.65%, sys=3.50%, ctx=1889, majf=0, minf=1 00:10:42.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.584 issued rwts: total=864,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.584 job1: (groupid=0, jobs=1): err= 0: pid=2392721: Thu Nov 28 18:12:54 2024 00:10:42.584 read: IOPS=78, BW=316KiB/s (323kB/s)(324KiB/1026msec) 00:10:42.584 slat (nsec): min=8385, max=66840, avg=26030.32, stdev=11560.89 00:10:42.584 clat (usec): min=310, max=41269, avg=10470.29, stdev=17581.10 00:10:42.584 lat (usec): min=344, max=41282, avg=10496.32, stdev=17578.88 00:10:42.584 clat percentiles (usec): 00:10:42.584 | 1.00th=[ 310], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 416], 00:10:42.584 | 30.00th=[ 437], 40.00th=[ 465], 50.00th=[ 510], 60.00th=[ 529], 00:10:42.584 | 70.00th=[ 603], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:10:42.584 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:42.584 | 99.99th=[41157] 00:10:42.584 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:10:42.584 slat (nsec): min=5916, max=70453, avg=24745.68, stdev=11332.55 00:10:42.584 clat (usec): min=167, max=591, avg=310.36, stdev=73.83 00:10:42.584 lat (usec): min=187, max=631, avg=335.11, stdev=73.60 00:10:42.584 clat percentiles (usec): 00:10:42.584 | 1.00th=[ 192], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 243], 00:10:42.584 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 314], 60.00th=[ 330], 00:10:42.584 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 392], 95.00th=[ 461], 00:10:42.584 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 594], 
99.95th=[ 594], 00:10:42.584 | 99.99th=[ 594] 00:10:42.584 bw ( KiB/s): min= 4096, max= 4096, per=22.91%, avg=4096.00, stdev= 0.00, samples=1 00:10:42.584 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:42.584 lat (usec) : 250=20.40%, 500=69.98%, 750=6.24% 00:10:42.584 lat (msec) : 50=3.37% 00:10:42.584 cpu : usr=0.59%, sys=1.56%, ctx=593, majf=0, minf=2 00:10:42.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.584 issued rwts: total=81,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.584 job2: (groupid=0, jobs=1): err= 0: pid=2392722: Thu Nov 28 18:12:54 2024 00:10:42.584 read: IOPS=1154, BW=4620KiB/s (4731kB/s)(4740KiB/1026msec) 00:10:42.584 slat (nsec): min=5386, max=57099, avg=18116.57, stdev=9626.93 00:10:42.584 clat (usec): min=241, max=41919, avg=496.30, stdev=2649.92 00:10:42.584 lat (usec): min=247, max=41952, avg=514.41, stdev=2650.27 00:10:42.584 clat percentiles (usec): 00:10:42.584 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:10:42.584 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 318], 00:10:42.584 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 379], 95.00th=[ 437], 00:10:42.584 | 99.00th=[ 537], 99.50th=[ 652], 99.90th=[41681], 99.95th=[41681], 00:10:42.584 | 99.99th=[41681] 00:10:42.584 write: IOPS=1497, BW=5988KiB/s (6132kB/s)(6144KiB/1026msec); 0 zone resets 00:10:42.584 slat (nsec): min=6599, max=72081, avg=14837.02, stdev=9543.94 00:10:42.584 clat (usec): min=181, max=433, avg=247.93, stdev=51.52 00:10:42.584 lat (usec): min=189, max=482, avg=262.77, stdev=54.44 00:10:42.584 clat percentiles (usec): 00:10:42.584 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 208], 00:10:42.584 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 239], 00:10:42.584 | 70.00th=[ 262], 80.00th=[ 293], 90.00th=[ 334], 95.00th=[ 359], 00:10:42.584 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 429], 99.95th=[ 433], 00:10:42.584 | 99.99th=[ 433] 00:10:42.584 bw ( KiB/s): min= 4096, max= 8192, per=34.37%, avg=6144.00, stdev=2896.31, samples=2 00:10:42.584 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:42.584 lat (usec) : 250=37.49%, 500=61.89%, 750=0.44% 00:10:42.584 lat (msec) : 50=0.18% 00:10:42.584 cpu : usr=2.15%, sys=4.68%, ctx=2721, majf=0, minf=2 00:10:42.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.584 issued rwts: total=1185,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.584 job3: (groupid=0, jobs=1): err= 0: pid=2392723: Thu Nov 28 18:12:54 2024 00:10:42.584 read: IOPS=1009, BW=4039KiB/s (4136kB/s)(4104KiB/1016msec) 00:10:42.584 slat (nsec): min=5900, max=47212, avg=13823.43, stdev=6397.11 00:10:42.584 clat (usec): min=241, max=42410, avg=601.89, stdev=3404.04 00:10:42.584 lat (usec): min=247, max=42430, avg=615.71, stdev=3404.57 00:10:42.584 clat percentiles (usec): 00:10:42.585 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 289], 00:10:42.585 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 322], 00:10:42.585 | 
70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 359], 95.00th=[ 392], 00:10:42.585 | 99.00th=[ 857], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:42.585 | 99.99th=[42206] 00:10:42.585 write: IOPS=1511, BW=6047KiB/s (6192kB/s)(6144KiB/1016msec); 0 zone resets 00:10:42.585 slat (nsec): min=7700, max=63889, avg=15346.62, stdev=7739.37 00:10:42.585 clat (usec): min=179, max=417, avg=227.81, stdev=25.71 00:10:42.585 lat (usec): min=187, max=428, avg=243.16, stdev=30.31 00:10:42.585 clat percentiles (usec): 00:10:42.585 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:10:42.585 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 233], 00:10:42.585 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 273], 00:10:42.585 | 99.00th=[ 297], 99.50th=[ 322], 99.90th=[ 396], 99.95th=[ 416], 00:10:42.585 | 99.99th=[ 416] 00:10:42.585 bw ( KiB/s): min= 5200, max= 7088, per=34.37%, avg=6144.00, stdev=1335.02, samples=2 00:10:42.585 iops : min= 1300, max= 1772, avg=1536.00, stdev=333.75, samples=2 00:10:42.585 lat (usec) : 250=49.30%, 500=49.96%, 750=0.31%, 1000=0.04% 00:10:42.585 lat (msec) : 2=0.12%, 50=0.27% 00:10:42.585 cpu : usr=2.76%, sys=4.73%, ctx=2564, majf=0, minf=1 00:10:42.585 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:42.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.585 issued rwts: total=1026,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.585 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:42.585 00:10:42.585 Run status group 0 (all jobs): 00:10:42.585 READ: bw=12.0MiB/s (12.5MB/s), 316KiB/s-4620KiB/s (323kB/s-4731kB/s), io=12.3MiB (12.9MB), run=1016-1031msec 00:10:42.585 WRITE: bw=17.5MiB/s (18.3MB/s), 1996KiB/s-6047KiB/s (2044kB/s-6192kB/s), io=18.0MiB (18.9MB), run=1016-1031msec 00:10:42.585 00:10:42.585 Disk stats (read/write): 00:10:42.585 nvme0n1: ios=895/1024, merge=0/0, ticks=1478/228, in_queue=1706, util=100.00% 00:10:42.585 nvme0n2: ios=86/512, merge=0/0, ticks=659/148, in_queue=807, util=87.21% 00:10:42.585 nvme0n3: ios=1176/1536, merge=0/0, ticks=357/366, in_queue=723, util=89.06% 00:10:42.585 nvme0n4: ios=1070/1188, merge=0/0, ticks=900/260, in_queue=1160, util=99.58% 00:10:42.585 18:12:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:42.585 [global] 00:10:42.585 thread=1 00:10:42.585 invalidate=1 00:10:42.585 rw=write 00:10:42.585 time_based=1 00:10:42.585 runtime=1 00:10:42.585 ioengine=libaio 00:10:42.585 direct=1 00:10:42.585 bs=4096 00:10:42.585 iodepth=128 00:10:42.585 norandommap=0 00:10:42.585 numjobs=1 00:10:42.585 00:10:42.585 verify_dump=1 00:10:42.585 verify_backlog=512 00:10:42.585 verify_state_save=0 00:10:42.585 do_verify=1 00:10:42.585 verify=crc32c-intel 00:10:42.585 [job0] 00:10:42.585 filename=/dev/nvme0n1 00:10:42.585 [job1] 00:10:42.585 filename=/dev/nvme0n2 00:10:42.585 [job2] 00:10:42.585 filename=/dev/nvme0n3 00:10:42.585 [job3] 00:10:42.585 filename=/dev/nvme0n4 00:10:42.585 Could not set queue depth (nvme0n1) 00:10:42.585 Could not set queue depth (nvme0n2) 00:10:42.585 Could not set queue depth (nvme0n3) 00:10:42.585 Could not set queue depth (nvme0n4) 00:10:42.585 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.585 job1: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.585 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.585 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.585 fio-3.35 00:10:42.585 Starting 4 threads 00:10:43.963 00:10:43.963 job0: (groupid=0, jobs=1): err= 0: pid=2392949: Thu Nov 28 18:12:55 2024 00:10:43.963 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:10:43.963 slat (usec): min=3, max=29278, avg=165.66, stdev=1332.46 00:10:43.963 clat (usec): min=4294, max=67701, avg=21238.49, stdev=10243.04 00:10:43.963 lat (usec): min=4304, max=71070, avg=21404.15, stdev=10364.70 00:10:43.963 clat percentiles (usec): 00:10:43.963 | 1.00th=[ 5342], 5.00th=[12387], 10.00th=[13435], 20.00th=[13698], 00:10:43.963 | 30.00th=[15533], 40.00th=[16712], 50.00th=[17433], 60.00th=[18220], 00:10:43.963 | 70.00th=[20055], 80.00th=[30016], 90.00th=[38536], 95.00th=[39060], 00:10:43.963 | 99.00th=[59507], 99.50th=[64750], 99.90th=[66323], 99.95th=[67634], 00:10:43.963 | 99.99th=[67634] 00:10:43.963 write: IOPS=3216, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1007msec); 0 zone resets 00:10:43.963 slat (usec): min=4, max=16803, avg=139.43, stdev=770.83 00:10:43.963 clat (usec): min=2091, max=59812, avg=19156.36, stdev=9436.65 00:10:43.963 lat (usec): min=2101, max=59851, avg=19295.79, stdev=9487.83 00:10:43.963 clat percentiles (usec): 00:10:43.963 | 1.00th=[ 3949], 5.00th=[10290], 10.00th=[13173], 20.00th=[13829], 00:10:43.963 | 30.00th=[14877], 40.00th=[15401], 50.00th=[15795], 60.00th=[16057], 00:10:43.963 | 70.00th=[18220], 80.00th=[26870], 90.00th=[27919], 95.00th=[40633], 00:10:43.963 | 99.00th=[56361], 99.50th=[57934], 99.90th=[58983], 99.95th=[60031], 00:10:43.963 | 99.99th=[60031] 00:10:43.963 bw ( KiB/s): min= 9864, max=15032, per=22.76%, avg=12448.00, stdev=3654.33, samples=2 00:10:43.963 iops : min= 2466, max= 3758, avg=3112.00, stdev=913.58, samples=2 00:10:43.963 lat (msec) : 4=0.63%, 10=3.11%, 20=67.80%, 50=26.21%, 100=2.25% 00:10:43.963 cpu : usr=4.27%, sys=6.86%, ctx=333, majf=0, minf=1 00:10:43.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:43.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.963 issued rwts: total=3072,3239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.963 job1: (groupid=0, jobs=1): err= 0: pid=2392950: Thu Nov 28 18:12:55 2024 00:10:43.963 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:10:43.963 slat (usec): min=3, max=44765, avg=177.26, stdev=1454.42 00:10:43.963 clat (usec): min=7284, max=61911, avg=21742.85, stdev=10856.19 00:10:43.963 lat (usec): min=7302, max=61925, avg=21920.12, stdev=10926.68 00:10:43.963 clat percentiles (usec): 00:10:43.963 | 1.00th=[11338], 5.00th=[12911], 10.00th=[14091], 20.00th=[14353], 00:10:43.963 | 30.00th=[14877], 40.00th=[17171], 50.00th=[19530], 60.00th=[20055], 00:10:43.963 | 70.00th=[21890], 80.00th=[23462], 90.00th=[35914], 95.00th=[55837], 00:10:43.963 | 99.00th=[55837], 99.50th=[60556], 99.90th=[62129], 99.95th=[62129], 00:10:43.963 | 99.99th=[62129] 00:10:43.963 write: IOPS=2906, BW=11.4MiB/s (11.9MB/s)(11.5MiB/1014msec); 0 zone resets 00:10:43.963 slat (usec): min=4, max=18678, avg=169.86, stdev=1023.97 00:10:43.963 clat 
(usec): min=3494, max=72195, avg=24707.62, stdev=16287.20 00:10:43.963 lat (usec): min=4457, max=72204, avg=24877.48, stdev=16412.84 00:10:43.963 clat percentiles (usec): 00:10:43.963 | 1.00th=[ 7111], 5.00th=[11207], 10.00th=[12256], 20.00th=[14222], 00:10:43.963 | 30.00th=[15270], 40.00th=[16057], 50.00th=[17695], 60.00th=[21365], 00:10:43.963 | 70.00th=[26870], 80.00th=[27395], 90.00th=[60031], 95.00th=[65274], 00:10:43.963 | 99.00th=[68682], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:10:43.963 | 99.99th=[71828] 00:10:43.963 bw ( KiB/s): min= 9576, max=12976, per=20.61%, avg=11276.00, stdev=2404.16, samples=2 00:10:43.963 iops : min= 2394, max= 3244, avg=2819.00, stdev=601.04, samples=2 00:10:43.963 lat (msec) : 4=0.02%, 10=2.12%, 20=55.37%, 50=32.56%, 100=9.93% 00:10:43.963 cpu : usr=2.67%, sys=5.63%, ctx=250, majf=0, minf=1 00:10:43.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:43.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.963 issued rwts: total=2560,2947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.963 job2: (groupid=0, jobs=1): err= 0: pid=2392951: Thu Nov 28 18:12:55 2024 00:10:43.963 read: IOPS=3894, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1007msec) 00:10:43.963 slat (usec): min=2, max=16578, avg=138.10, stdev=977.14 00:10:43.963 clat (usec): min=5368, max=32541, avg=16998.66, stdev=4805.63 00:10:43.963 lat (usec): min=5376, max=32562, avg=17136.75, stdev=4857.57 00:10:43.963 clat percentiles (usec): 00:10:43.963 | 1.00th=[ 6128], 5.00th=[10814], 10.00th=[13042], 20.00th=[14746], 00:10:43.963 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15533], 60.00th=[15795], 00:10:43.963 | 70.00th=[17695], 80.00th=[20317], 90.00th=[24511], 95.00th=[27132], 00:10:43.963 | 99.00th=[30802], 99.50th=[31589], 99.90th=[32375], 99.95th=[32637], 00:10:43.963 | 99.99th=[32637] 00:10:43.963 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:10:43.963 slat (usec): min=4, max=12381, avg=101.27, stdev=462.13 00:10:43.963 clat (usec): min=1568, max=32499, avg=14859.61, stdev=3275.90 00:10:43.963 lat (usec): min=1579, max=32509, avg=14960.88, stdev=3321.29 00:10:43.963 clat percentiles (usec): 00:10:43.963 | 1.00th=[ 4178], 5.00th=[ 7046], 10.00th=[ 9634], 20.00th=[13960], 00:10:43.963 | 30.00th=[15008], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:10:43.963 | 70.00th=[16188], 80.00th=[16581], 90.00th=[16909], 95.00th=[17695], 00:10:43.963 | 99.00th=[20317], 99.50th=[20579], 99.90th=[31589], 99.95th=[32113], 00:10:43.963 | 99.99th=[32375] 00:10:43.963 bw ( KiB/s): min=16384, max=16384, per=29.95%, avg=16384.00, stdev= 0.00, samples=2 00:10:43.963 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:43.963 lat (msec) : 2=0.04%, 4=0.42%, 10=7.27%, 20=81.65%, 50=10.61% 00:10:43.963 cpu : usr=5.37%, sys=9.05%, ctx=510, majf=0, minf=1 00:10:43.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:43.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.963 issued rwts: total=3922,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.963 job3: (groupid=0, jobs=1): err= 0: pid=2392952: Thu Nov 28 18:12:55 2024 
00:10:43.963 read: IOPS=3092, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1014msec) 00:10:43.963 slat (usec): min=2, max=27626, avg=162.94, stdev=1246.40 00:10:43.963 clat (usec): min=2834, max=78795, avg=19834.40, stdev=9539.50 00:10:43.963 lat (usec): min=10017, max=78811, avg=19997.34, stdev=9645.30 00:10:43.963 clat percentiles (usec): 00:10:43.963 | 1.00th=[10683], 5.00th=[12518], 10.00th=[14746], 20.00th=[15270], 00:10:43.963 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16188], 60.00th=[16712], 00:10:43.963 | 70.00th=[17957], 80.00th=[20841], 90.00th=[34341], 95.00th=[44827], 00:10:43.963 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54264], 99.95th=[72877], 00:10:43.963 | 99.99th=[79168] 00:10:43.963 write: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec); 0 zone resets 00:10:43.963 slat (usec): min=4, max=34878, avg=130.34, stdev=936.92 00:10:43.963 clat (usec): min=8948, max=51679, avg=17968.62, stdev=6153.73 00:10:43.963 lat (usec): min=9618, max=51685, avg=18098.96, stdev=6201.75 00:10:43.963 clat percentiles (usec): 00:10:43.963 | 1.00th=[10683], 5.00th=[14222], 10.00th=[14484], 20.00th=[15008], 00:10:43.963 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16319], 60.00th=[16712], 00:10:43.963 | 70.00th=[16909], 80.00th=[17695], 90.00th=[22938], 95.00th=[28705], 00:10:43.963 | 99.00th=[43779], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:10:43.963 | 99.99th=[51643] 00:10:43.963 bw ( KiB/s): min=12288, max=15872, per=25.74%, avg=14080.00, stdev=2534.27, samples=2 00:10:43.963 iops : min= 3072, max= 3968, avg=3520.00, stdev=633.57, samples=2 00:10:43.963 lat (msec) : 4=0.01%, 10=0.24%, 20=82.43%, 50=15.42%, 100=1.90% 00:10:43.963 cpu : usr=3.85%, sys=7.01%, ctx=286, majf=0, minf=1 00:10:43.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:43.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.963 issued rwts: total=3136,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.963 00:10:43.963 Run status group 0 (all jobs): 00:10:43.964 READ: bw=48.9MiB/s (51.3MB/s), 9.86MiB/s-15.2MiB/s (10.3MB/s-16.0MB/s), io=49.6MiB (52.0MB), run=1007-1014msec 00:10:43.964 WRITE: bw=53.4MiB/s (56.0MB/s), 11.4MiB/s-15.9MiB/s (11.9MB/s-16.7MB/s), io=54.2MiB (56.8MB), run=1007-1014msec 00:10:43.964 00:10:43.964 Disk stats (read/write): 00:10:43.964 nvme0n1: ios=2612/2799, merge=0/0, ticks=37055/39396, in_queue=76451, util=89.88% 00:10:43.964 nvme0n2: ios=2100/2255, merge=0/0, ticks=33925/50616, in_queue=84541, util=95.79% 00:10:43.964 nvme0n3: ios=3132/3319, merge=0/0, ticks=51129/48347, in_queue=99476, util=98.16% 00:10:43.964 nvme0n4: ios=2604/2791, merge=0/0, ticks=25955/22542, in_queue=48497, util=98.68% 00:10:43.964 18:12:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:43.964 [global] 00:10:43.964 thread=1 00:10:43.964 invalidate=1 00:10:43.964 rw=randwrite 00:10:43.964 time_based=1 00:10:43.964 runtime=1 00:10:43.964 ioengine=libaio 00:10:43.964 direct=1 00:10:43.964 bs=4096 00:10:43.964 iodepth=128 00:10:43.964 norandommap=0 00:10:43.964 numjobs=1 00:10:43.964 00:10:43.964 verify_dump=1 00:10:43.964 verify_backlog=512 00:10:43.964 verify_state_save=0 00:10:43.964 do_verify=1 00:10:43.964 verify=crc32c-intel 00:10:43.964 [job0] 00:10:43.964 
filename=/dev/nvme0n1 00:10:43.964 [job1] 00:10:43.964 filename=/dev/nvme0n2 00:10:43.964 [job2] 00:10:43.964 filename=/dev/nvme0n3 00:10:43.964 [job3] 00:10:43.964 filename=/dev/nvme0n4 00:10:43.964 Could not set queue depth (nvme0n1) 00:10:43.964 Could not set queue depth (nvme0n2) 00:10:43.964 Could not set queue depth (nvme0n3) 00:10:43.964 Could not set queue depth (nvme0n4) 00:10:44.223 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.223 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.223 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.223 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:44.223 fio-3.35 00:10:44.223 Starting 4 threads 00:10:45.602 00:10:45.602 job0: (groupid=0, jobs=1): err= 0: pid=2393300: Thu Nov 28 18:12:57 2024 00:10:45.602 read: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1020msec) 00:10:45.602 slat (usec): min=2, max=26151, avg=141.32, stdev=1080.78 00:10:45.603 clat (usec): min=1919, max=84401, avg=19699.49, stdev=13826.39 00:10:45.603 lat (usec): min=1924, max=84414, avg=19840.81, stdev=13915.76 00:10:45.603 clat percentiles (usec): 00:10:45.603 | 1.00th=[ 3458], 5.00th=[ 7439], 10.00th=[ 9110], 20.00th=[11207], 00:10:45.603 | 30.00th=[12518], 40.00th=[13566], 50.00th=[14484], 60.00th=[15664], 00:10:45.603 | 70.00th=[20055], 80.00th=[26870], 90.00th=[36439], 95.00th=[54789], 00:10:45.603 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[84411], 00:10:45.603 | 99.99th=[84411] 00:10:45.603 write: IOPS=3549, BW=13.9MiB/s (14.5MB/s)(14.1MiB/1020msec); 0 zone resets 00:10:45.603 slat (usec): min=3, max=35620, avg=129.72, stdev=1107.72 00:10:45.603 clat (usec): min=3489, max=73965, avg=16217.15, stdev=8984.15 00:10:45.603 lat (usec): min=3494, max=73976, avg=16346.87, stdev=9051.09 00:10:45.603 clat percentiles (usec): 00:10:45.603 | 1.00th=[ 5473], 5.00th=[ 8094], 10.00th=[11207], 20.00th=[12780], 00:10:45.603 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14222], 60.00th=[14484], 00:10:45.603 | 70.00th=[14615], 80.00th=[15401], 90.00th=[23462], 95.00th=[37487], 00:10:45.603 | 99.00th=[47973], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:10:45.603 | 99.99th=[73925] 00:10:45.603 bw ( KiB/s): min=12288, max=16384, per=26.15%, avg=14336.00, stdev=2896.31, samples=2 00:10:45.603 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:10:45.603 lat (msec) : 2=0.17%, 4=0.71%, 10=8.16%, 20=69.64%, 50=17.84% 00:10:45.603 lat (msec) : 100=3.48% 00:10:45.603 cpu : usr=2.36%, sys=5.50%, ctx=371, majf=0, minf=1 00:10:45.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:45.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.603 issued rwts: total=3584,3620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.603 job1: (groupid=0, jobs=1): err= 0: pid=2393301: Thu Nov 28 18:12:57 2024 00:10:45.603 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:10:45.603 slat (usec): min=2, max=20925, avg=118.67, stdev=932.83 00:10:45.603 clat (usec): min=3660, max=49859, avg=15023.98, stdev=4867.97 00:10:45.603 lat (usec): min=3668, max=49879, avg=15142.65, 
stdev=4954.52 00:10:45.603 clat percentiles (usec): 00:10:45.603 | 1.00th=[ 5276], 5.00th=[ 8848], 10.00th=[11994], 20.00th=[12649], 00:10:45.603 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13698], 60.00th=[13829], 00:10:45.603 | 70.00th=[15401], 80.00th=[16712], 90.00th=[22152], 95.00th=[27395], 00:10:45.603 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31327], 99.95th=[32900], 00:10:45.603 | 99.99th=[50070] 00:10:45.603 write: IOPS=4944, BW=19.3MiB/s (20.3MB/s)(19.6MiB/1014msec); 0 zone resets 00:10:45.603 slat (usec): min=3, max=14554, avg=71.30, stdev=517.16 00:10:45.603 clat (usec): min=608, max=135968, avg=13394.04, stdev=12085.03 00:10:45.603 lat (usec): min=624, max=135977, avg=13465.35, stdev=12107.20 00:10:45.603 clat percentiles (msec): 00:10:45.603 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 9], 00:10:45.603 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 14], 00:10:45.603 | 70.00th=[ 14], 80.00th=[ 15], 90.00th=[ 16], 95.00th=[ 27], 00:10:45.603 | 99.00th=[ 77], 99.50th=[ 105], 99.90th=[ 129], 99.95th=[ 131], 00:10:45.603 | 99.99th=[ 136] 00:10:45.603 bw ( KiB/s): min=18616, max=20480, per=35.66%, avg=19548.00, stdev=1318.05, samples=2 00:10:45.603 iops : min= 4654, max= 5120, avg=4887.00, stdev=329.51, samples=2 00:10:45.603 lat (usec) : 750=0.04%, 1000=0.09% 00:10:45.603 lat (msec) : 2=0.18%, 4=3.40%, 10=14.34%, 20=72.35%, 50=8.63% 00:10:45.603 lat (msec) : 100=0.61%, 250=0.36% 00:10:45.603 cpu : usr=4.34%, sys=5.23%, ctx=463, majf=0, minf=1 00:10:45.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:45.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.603 issued rwts: total=4096,5014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.603 job2: (groupid=0, jobs=1): err= 0: pid=2393302: Thu Nov 28 18:12:57 2024 00:10:45.603 read: IOPS=2494, BW=9977KiB/s (10.2MB/s)(10.3MiB/1056msec) 00:10:45.603 slat (usec): min=2, max=26095, avg=181.63, stdev=1412.78 00:10:45.603 clat (usec): min=3858, max=65223, avg=25788.00, stdev=12242.84 00:10:45.603 lat (usec): min=3867, max=65236, avg=25969.62, stdev=12340.70 00:10:45.603 clat percentiles (usec): 00:10:45.603 | 1.00th=[ 4686], 5.00th=[11338], 10.00th=[13829], 20.00th=[17433], 00:10:45.603 | 30.00th=[18220], 40.00th=[20841], 50.00th=[22152], 60.00th=[23987], 00:10:45.603 | 70.00th=[27132], 80.00th=[34866], 90.00th=[42206], 95.00th=[56361], 00:10:45.603 | 99.00th=[61604], 99.50th=[62653], 99.90th=[65274], 99.95th=[65274], 00:10:45.603 | 99.99th=[65274] 00:10:45.603 write: IOPS=2909, BW=11.4MiB/s (11.9MB/s)(12.0MiB/1056msec); 0 zone resets 00:10:45.603 slat (usec): min=3, max=22021, avg=147.25, stdev=1340.75 00:10:45.603 clat (usec): min=791, max=65235, avg=21318.91, stdev=10387.68 00:10:45.603 lat (usec): min=809, max=69864, avg=21466.16, stdev=10515.98 00:10:45.603 clat percentiles (usec): 00:10:45.603 | 1.00th=[ 2835], 5.00th=[10683], 10.00th=[11600], 20.00th=[14353], 00:10:45.603 | 30.00th=[17171], 40.00th=[17695], 50.00th=[18220], 60.00th=[18744], 00:10:45.603 | 70.00th=[21365], 80.00th=[26608], 90.00th=[38011], 95.00th=[42730], 00:10:45.603 | 99.00th=[57410], 99.50th=[57934], 99.90th=[57934], 99.95th=[65274], 00:10:45.603 | 99.99th=[65274] 00:10:45.603 bw ( KiB/s): min=10800, max=13344, per=22.02%, avg=12072.00, stdev=1798.88, samples=2 00:10:45.603 iops : min= 2700, max= 3336, avg=3018.00, 
stdev=449.72, samples=2 00:10:45.603 lat (usec) : 1000=0.07% 00:10:45.603 lat (msec) : 4=1.24%, 10=3.17%, 20=49.75%, 50=42.06%, 100=3.70% 00:10:45.603 cpu : usr=2.37%, sys=4.08%, ctx=174, majf=0, minf=1 00:10:45.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:45.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.603 issued rwts: total=2634,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.603 job3: (groupid=0, jobs=1): err= 0: pid=2393303: Thu Nov 28 18:12:57 2024 00:10:45.603 read: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(10.0MiB/1019msec) 00:10:45.603 slat (usec): min=2, max=44522, avg=180.91, stdev=1479.98 00:10:45.603 clat (usec): min=1886, max=104384, avg=24191.95, stdev=17514.86 00:10:45.603 lat (usec): min=1889, max=104419, avg=24372.86, stdev=17615.42 00:10:45.603 clat percentiles (msec): 00:10:45.603 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 13], 20.00th=[ 15], 00:10:45.603 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 17], 60.00th=[ 18], 00:10:45.603 | 70.00th=[ 24], 80.00th=[ 37], 90.00th=[ 47], 95.00th=[ 58], 00:10:45.603 | 99.00th=[ 100], 99.50th=[ 100], 99.90th=[ 100], 99.95th=[ 100], 00:10:45.603 | 99.99th=[ 105] 00:10:45.603 write: IOPS=2714, BW=10.6MiB/s (11.1MB/s)(10.8MiB/1019msec); 0 zone resets 00:10:45.603 slat (usec): min=3, max=28895, avg=177.13, stdev=1466.39 00:10:45.603 clat (usec): min=6743, max=73206, avg=23951.20, stdev=11826.59 00:10:45.603 lat (usec): min=6747, max=73251, avg=24128.33, stdev=11937.25 00:10:45.603 clat percentiles (usec): 00:10:45.603 | 1.00th=[ 6783], 5.00th=[11207], 10.00th=[14091], 20.00th=[15533], 00:10:45.603 | 30.00th=[16057], 40.00th=[16450], 50.00th=[17171], 60.00th=[19792], 00:10:45.603 | 70.00th=[33424], 80.00th=[36963], 90.00th=[41157], 95.00th=[48497], 00:10:45.603 | 99.00th=[50070], 99.50th=[50594], 99.90th=[61080], 99.95th=[61080], 00:10:45.603 | 99.99th=[72877] 00:10:45.603 bw ( KiB/s): min= 8816, max=12288, per=19.25%, avg=10552.00, stdev=2455.07, samples=2 00:10:45.603 iops : min= 2204, max= 3072, avg=2638.00, stdev=613.77, samples=2 00:10:45.603 lat (msec) : 2=0.15%, 4=0.54%, 10=4.06%, 20=57.94%, 50=32.99% 00:10:45.603 lat (msec) : 100=4.30%, 250=0.02% 00:10:45.603 cpu : usr=2.36%, sys=3.54%, ctx=221, majf=0, minf=1 00:10:45.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:45.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.603 issued rwts: total=2560,2766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.603 00:10:45.603 Run status group 0 (all jobs): 00:10:45.603 READ: bw=47.6MiB/s (49.9MB/s), 9977KiB/s-15.8MiB/s (10.2MB/s-16.5MB/s), io=50.3MiB (52.7MB), run=1014-1056msec 00:10:45.603 WRITE: bw=53.5MiB/s (56.1MB/s), 10.6MiB/s-19.3MiB/s (11.1MB/s-20.3MB/s), io=56.5MiB (59.3MB), run=1014-1056msec 00:10:45.603 00:10:45.603 Disk stats (read/write): 00:10:45.603 nvme0n1: ios=2834/3072, merge=0/0, ticks=36302/31730, in_queue=68032, util=97.39% 00:10:45.603 nvme0n2: ios=3634/4151, merge=0/0, ticks=52109/53620, in_queue=105729, util=95.33% 00:10:45.603 nvme0n3: ios=2107/2559, merge=0/0, ticks=40093/45384, in_queue=85477, util=98.34% 00:10:45.603 nvme0n4: ios=2183/2560, merge=0/0, ticks=24654/29448, 
in_queue=54102, util=98.12% 00:10:45.603 18:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:45.603 18:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2393439 00:10:45.603 18:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:45.603 18:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:45.603 [global] 00:10:45.603 thread=1 00:10:45.603 invalidate=1 00:10:45.603 rw=read 00:10:45.603 time_based=1 00:10:45.603 runtime=10 00:10:45.603 ioengine=libaio 00:10:45.603 direct=1 00:10:45.603 bs=4096 00:10:45.603 iodepth=1 00:10:45.603 norandommap=1 00:10:45.603 numjobs=1 00:10:45.603 00:10:45.603 [job0] 00:10:45.603 filename=/dev/nvme0n1 00:10:45.603 [job1] 00:10:45.603 filename=/dev/nvme0n2 00:10:45.603 [job2] 00:10:45.603 filename=/dev/nvme0n3 00:10:45.603 [job3] 00:10:45.603 filename=/dev/nvme0n4 00:10:45.603 Could not set queue depth (nvme0n1) 00:10:45.603 Could not set queue depth (nvme0n2) 00:10:45.603 Could not set queue depth (nvme0n3) 00:10:45.603 Could not set queue depth (nvme0n4) 00:10:45.603 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.603 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.603 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.604 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.604 fio-3.35 00:10:45.604 Starting 4 threads 00:10:48.897 18:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:48.897 18:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:48.897 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=7389184, buflen=4096 00:10:48.897 fio: pid=2393537, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:48.897 18:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.897 18:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:48.897 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=319488, buflen=4096 00:10:48.897 fio: pid=2393536, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:49.466 18:13:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.466 18:13:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:49.466 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=360448, buflen=4096 00:10:49.466 fio: pid=2393533, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:49.724 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=13750272, buflen=4096 00:10:49.724 fio: 
pid=2393535, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:49.724 18:13:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.724 18:13:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:49.724 00:10:49.724 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2393533: Thu Nov 28 18:13:01 2024 00:10:49.724 read: IOPS=25, BW=99.4KiB/s (102kB/s)(352KiB/3541msec) 00:10:49.724 slat (nsec): min=8181, max=49077, avg=24985.91, stdev=11000.35 00:10:49.724 clat (usec): min=406, max=85501, avg=39852.40, stdev=9861.91 00:10:49.724 lat (usec): min=426, max=85515, avg=39877.24, stdev=9860.65 00:10:49.724 clat percentiles (usec): 00:10:49.724 | 1.00th=[ 408], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:49.724 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:49.724 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:49.724 | 99.00th=[85459], 99.50th=[85459], 99.90th=[85459], 99.95th=[85459], 00:10:49.724 | 99.99th=[85459] 00:10:49.724 bw ( KiB/s): min= 96, max= 112, per=1.82%, avg=101.33, stdev= 6.53, samples=6 00:10:49.724 iops : min= 24, max= 28, avg=25.33, stdev= 1.63, samples=6 00:10:49.724 lat (usec) : 500=4.49% 00:10:49.724 lat (msec) : 50=93.26%, 100=1.12% 00:10:49.724 cpu : usr=0.00%, sys=0.11%, ctx=93, majf=0, minf=2 00:10:49.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.724 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.724 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.724 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2393535: Thu Nov 28 18:13:01 2024 00:10:49.724 read: IOPS=873, BW=3493KiB/s (3577kB/s)(13.1MiB/3844msec) 00:10:49.724 slat (usec): min=4, max=15895, avg=26.52, stdev=400.09 00:10:49.724 clat (usec): min=197, max=42381, avg=1105.65, stdev=5673.78 00:10:49.724 lat (usec): min=202, max=56974, avg=1132.17, stdev=5733.66 00:10:49.724 clat percentiles (usec): 00:10:49.724 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 229], 00:10:49.724 | 30.00th=[ 249], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 302], 00:10:49.724 | 70.00th=[ 318], 80.00th=[ 371], 90.00th=[ 412], 95.00th=[ 490], 00:10:49.724 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:49.724 | 99.99th=[42206] 00:10:49.724 bw ( KiB/s): min= 784, max= 7368, per=65.97%, avg=3657.29, stdev=2470.45, samples=7 00:10:49.724 iops : min= 196, max= 1842, avg=914.29, stdev=617.58, samples=7 00:10:49.724 lat (usec) : 250=30.20%, 500=65.10%, 750=2.65% 00:10:49.724 lat (msec) : 10=0.06%, 50=1.97% 00:10:49.724 cpu : usr=0.73%, sys=1.46%, ctx=3369, majf=0, minf=1 00:10:49.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.724 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.724 issued rwts: total=3358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.724 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:10:49.724 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2393536: Thu Nov 28 18:13:01 2024 00:10:49.724 read: IOPS=24, BW=97.6KiB/s (100.0kB/s)(312KiB/3196msec) 00:10:49.724 slat (usec): min=12, max=6913, avg=110.46, stdev=775.27 00:10:49.724 clat (usec): min=553, max=42035, avg=40554.98, stdev=4598.36 00:10:49.724 lat (usec): min=575, max=47985, avg=40666.42, stdev=4674.17 00:10:49.724 clat percentiles (usec): 00:10:49.724 | 1.00th=[ 553], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:49.724 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:49.724 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:49.724 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:49.724 | 99.99th=[42206] 00:10:49.724 bw ( KiB/s): min= 96, max= 104, per=1.77%, avg=98.67, stdev= 4.13, samples=6 00:10:49.724 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:10:49.724 lat (usec) : 750=1.27% 00:10:49.724 lat (msec) : 50=97.47% 00:10:49.724 cpu : usr=0.13%, sys=0.00%, ctx=80, majf=0, minf=2 00:10:49.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.724 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.724 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.724 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2393537: Thu Nov 28 18:13:01 2024 00:10:49.724 read: IOPS=613, BW=2452KiB/s (2511kB/s)(7216KiB/2943msec) 00:10:49.724 slat (nsec): min=5552, max=63107, avg=19938.37, stdev=10546.29 00:10:49.724 clat (usec): min=240, max=42459, avg=1593.53, stdev=7014.89 00:10:49.724 lat (usec): min=255, max=42492, avg=1613.47, stdev=7014.92 00:10:49.724 clat percentiles (usec): 00:10:49.724 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:10:49.724 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 334], 60.00th=[ 355], 00:10:49.724 | 70.00th=[ 383], 80.00th=[ 408], 90.00th=[ 494], 95.00th=[ 529], 00:10:49.724 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:49.724 | 99.99th=[42206] 00:10:49.724 bw ( KiB/s): min= 776, max= 5664, per=49.56%, avg=2747.20, stdev=1905.03, samples=5 00:10:49.724 iops : min= 194, max= 1416, avg=686.80, stdev=476.26, samples=5 00:10:49.724 lat (usec) : 250=0.44%, 500=90.25%, 750=6.15% 00:10:49.724 lat (msec) : 10=0.06%, 50=3.05% 00:10:49.724 cpu : usr=0.61%, sys=1.36%, ctx=1805, majf=0, minf=1 00:10:49.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.724 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.724 issued rwts: total=1805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.724 00:10:49.724 Run status group 0 (all jobs): 00:10:49.724 READ: bw=5543KiB/s (5676kB/s), 97.6KiB/s-3493KiB/s (100.0kB/s-3577kB/s), io=20.8MiB (21.8MB), run=2943-3844msec 00:10:49.724 00:10:49.724 Disk stats (read/write): 00:10:49.724 nvme0n1: ios=128/0, merge=0/0, ticks=3654/0, in_queue=3654, util=99.54% 00:10:49.724 nvme0n2: ios=3388/0, merge=0/0, ticks=4394/0, in_queue=4394, util=98.39% 
00:10:49.724 nvme0n3: ios=76/0, merge=0/0, ticks=3084/0, in_queue=3084, util=96.60% 00:10:49.724 nvme0n4: ios=1802/0, merge=0/0, ticks=2767/0, in_queue=2767, util=96.75% 00:10:49.982 18:13:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.982 18:13:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:50.241 18:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.241 18:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:50.809 18:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:50.809 18:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:51.067 18:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.067 18:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:51.325 18:13:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:51.325 18:13:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2393439 00:10:51.325 18:13:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:51.325 18:13:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.258 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.258 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:52.258 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:52.258 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.258 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:52.258 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.258 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:52.259 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:52.259 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:52.259 nvmf hotplug test: fio failed as expected 00:10:52.259 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.517 rmmod nvme_tcp 00:10:52.517 rmmod nvme_fabrics 00:10:52.517 rmmod nvme_keyring 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2391260 ']' 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2391260 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2391260 ']' 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2391260 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2391260 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2391260' 00:10:52.517 killing process with pid 2391260 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2391260 00:10:52.517 18:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2391260 00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.892 18:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.797 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:55.797 00:10:55.797 real 0m27.666s 00:10:55.797 user 1m37.317s 00:10:55.797 sys 0m6.763s 00:10:55.797 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.797 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.797 ************************************ 00:10:55.797 END TEST nvmf_fio_target 00:10:55.797 ************************************ 00:10:55.797 18:13:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:55.797 18:13:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.797 18:13:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.797 18:13:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.797 ************************************ 00:10:55.797 START TEST nvmf_bdevio 00:10:55.797 ************************************ 00:10:55.797 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:55.797 * Looking for test storage... 
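The nvmf_fio_target run that ends above is a hotplug test: fio readers are left running against the exported namespaces while the backing bdevs are deleted underneath them, so the per-job err=95 ("Operation not supported") results in the summary are the intended outcome, hence "nvmf hotplug test: fio failed as expected". The lines below are a condensed sketch reconstructed from the fio.sh trace in this log, not the script verbatim; rpc.py abbreviates the full /var/jenkins/workspace/.../spdk/scripts/rpc.py path, and the loop and wait/status handling are approximations of what the trace shows.

    # Background readers against /dev/nvme0n1..n4 were started via
    # fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10, then the backing
    # raid/malloc bdevs are deleted while fio is still running.
    fio_pid=2393439                      # PID recorded at fio.sh@59 above
    fio_status=0
    rpc.py bdev_raid_delete concat0
    rpc.py bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        rpc.py bdev_malloc_delete "$malloc_bdev"
    done
    wait "$fio_pid" || fio_status=4      # trace shows fio_status=4 after the wait
    [ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'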
00:10:56.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.055 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:56.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.056 --rc genhtml_branch_coverage=1 00:10:56.056 --rc genhtml_function_coverage=1 00:10:56.056 --rc genhtml_legend=1 00:10:56.056 --rc geninfo_all_blocks=1 00:10:56.056 --rc geninfo_unexecuted_blocks=1 00:10:56.056 00:10:56.056 ' 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:56.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.056 --rc genhtml_branch_coverage=1 00:10:56.056 --rc genhtml_function_coverage=1 00:10:56.056 --rc genhtml_legend=1 00:10:56.056 --rc geninfo_all_blocks=1 00:10:56.056 --rc geninfo_unexecuted_blocks=1 00:10:56.056 00:10:56.056 ' 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:56.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.056 --rc genhtml_branch_coverage=1 00:10:56.056 --rc genhtml_function_coverage=1 00:10:56.056 --rc genhtml_legend=1 00:10:56.056 --rc geninfo_all_blocks=1 00:10:56.056 --rc geninfo_unexecuted_blocks=1 00:10:56.056 00:10:56.056 ' 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:56.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.056 --rc genhtml_branch_coverage=1 00:10:56.056 --rc genhtml_function_coverage=1 00:10:56.056 --rc genhtml_legend=1 00:10:56.056 --rc geninfo_all_blocks=1 00:10:56.056 --rc geninfo_unexecuted_blocks=1 00:10:56.056 00:10:56.056 ' 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.056 18:13:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:56.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:56.056 18:13:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:58.589 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:58.589 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.589 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:58.590 18:13:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:58.590 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:58.590 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.590 
18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:58.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:10:58.590 00:10:58.590 --- 10.0.0.2 ping statistics --- 00:10:58.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.590 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:10:58.590 00:10:58.590 --- 10.0.0.1 ping statistics --- 00:10:58.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.590 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.590 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2396439 00:10:58.591 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:58.591 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2396439 00:10:58.591 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2396439 ']' 00:10:58.591 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.591 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.591 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.591 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.591 18:13:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:58.591 [2024-11-28 18:13:10.380753] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
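The nvmftestinit plumbing traced above reduces to the commands below; this is only a restatement of the ip/iptables calls already visible in the trace, with nothing added beyond ordering and comments. The interface names cvl_0_0/cvl_0_1 come from the E810 port discovery earlier. Moving the target port into its own network namespace is what lets one host act as both NVMe/TCP target (10.0.0.2) and initiator (10.0.0.1) over a directly cabled NIC pair, and the SPDK_NVMF-tagged iptables comment is what the earlier teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline keys on.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns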
00:10:58.591 [2024-11-28 18:13:10.380921] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.591 [2024-11-28 18:13:10.544380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.850 [2024-11-28 18:13:10.692136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.850 [2024-11-28 18:13:10.692213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.850 [2024-11-28 18:13:10.692239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.850 [2024-11-28 18:13:10.692262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.850 [2024-11-28 18:13:10.692283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.850 [2024-11-28 18:13:10.695188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:58.850 [2024-11-28 18:13:10.695246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:58.850 [2024-11-28 18:13:10.695296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.850 [2024-11-28 18:13:10.695303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.418 [2024-11-28 18:13:11.402392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.418 Malloc0 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.418 18:13:11 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.418 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.677 [2024-11-28 18:13:11.528666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:59.677 { 00:10:59.677 "params": { 00:10:59.677 "name": "Nvme$subsystem", 00:10:59.677 "trtype": "$TEST_TRANSPORT", 00:10:59.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:59.677 "adrfam": "ipv4", 00:10:59.677 "trsvcid": "$NVMF_PORT", 00:10:59.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:59.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:59.677 "hdgst": ${hdgst:-false}, 00:10:59.677 "ddgst": ${ddgst:-false} 00:10:59.677 }, 00:10:59.677 "method": "bdev_nvme_attach_controller" 00:10:59.677 } 00:10:59.677 EOF 00:10:59.677 )") 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:59.677 18:13:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:59.677 "params": { 00:10:59.677 "name": "Nvme1", 00:10:59.677 "trtype": "tcp", 00:10:59.677 "traddr": "10.0.0.2", 00:10:59.677 "adrfam": "ipv4", 00:10:59.677 "trsvcid": "4420", 00:10:59.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:59.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:59.677 "hdgst": false, 00:10:59.677 "ddgst": false 00:10:59.677 }, 00:10:59.677 "method": "bdev_nvme_attach_controller" 00:10:59.677 }' 00:10:59.677 [2024-11-28 18:13:11.614654] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
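Once nvmf_tgt is listening, the bdevio.sh trace above amounts to the short bring-up below; this is a condensed sketch of the RPC calls and generated JSON already shown, with rpc.py standing in for the full scripts path and the JSON fragment paraphrased in a comment rather than repeated. The --json /dev/fd/62 argument appears to feed that config to bdevio through a file descriptor rather than a temp file.

    # Target side, against the nvmf_tgt started above (default RPC socket /var/tmp/spdk.sock):
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevio consumes the generated JSON shown above (a single
    # bdev_nvme_attach_controller to 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode1)
    # and runs its CUnit suite against the resulting bdev, reported below as Nvme1n1.
    test/bdev/bdevio/bdevio --json /dev/fd/62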
00:10:59.677 [2024-11-28 18:13:11.614799] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2396594 ] 00:10:59.677 [2024-11-28 18:13:11.752358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:59.936 [2024-11-28 18:13:11.887586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.936 [2024-11-28 18:13:11.887631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.936 [2024-11-28 18:13:11.887636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.508 I/O targets: 00:11:00.508 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:00.508 00:11:00.508 00:11:00.508 CUnit - A unit testing framework for C - Version 2.1-3 00:11:00.508 http://cunit.sourceforge.net/ 00:11:00.508 00:11:00.508 00:11:00.508 Suite: bdevio tests on: Nvme1n1 00:11:00.508 Test: blockdev write read block ...passed 00:11:00.508 Test: blockdev write zeroes read block ...passed 00:11:00.508 Test: blockdev write zeroes read no split ...passed 00:11:00.508 Test: blockdev write zeroes read split ...passed 00:11:00.508 Test: blockdev write zeroes read split partial ...passed 00:11:00.508 Test: blockdev reset ...[2024-11-28 18:13:12.486827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:00.508 [2024-11-28 18:13:12.487016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:11:00.508 [2024-11-28 18:13:12.502441] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:00.508 passed 00:11:00.508 Test: blockdev write read 8 blocks ...passed 00:11:00.508 Test: blockdev write read size > 128k ...passed 00:11:00.508 Test: blockdev write read invalid size ...passed 00:11:00.508 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:00.508 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:00.508 Test: blockdev write read max offset ...passed 00:11:00.767 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:00.767 Test: blockdev writev readv 8 blocks ...passed 00:11:00.767 Test: blockdev writev readv 30 x 1block ...passed 00:11:00.767 Test: blockdev writev readv block ...passed 00:11:00.767 Test: blockdev writev readv size > 128k ...passed 00:11:00.767 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:00.767 Test: blockdev comparev and writev ...[2024-11-28 18:13:12.757913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.767 [2024-11-28 18:13:12.757986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:00.767 [2024-11-28 18:13:12.758025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.767 [2024-11-28 18:13:12.758051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:00.767 [2024-11-28 18:13:12.758555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.767 [2024-11-28 18:13:12.758589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:00.767 [2024-11-28 18:13:12.758633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.767 [2024-11-28 18:13:12.758660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:00.767 [2024-11-28 18:13:12.759142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.767 [2024-11-28 18:13:12.759176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:00.767 [2024-11-28 18:13:12.759209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.767 [2024-11-28 18:13:12.759233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:00.767 [2024-11-28 18:13:12.759769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.767 [2024-11-28 18:13:12.759802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:00.767 [2024-11-28 18:13:12.759834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:00.767 [2024-11-28 18:13:12.759858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:00.767 passed 00:11:00.767 Test: blockdev nvme passthru rw ...passed 00:11:00.767 Test: blockdev nvme passthru vendor specific ...[2024-11-28 18:13:12.844484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:00.767 [2024-11-28 18:13:12.844540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:00.767 [2024-11-28 18:13:12.844802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:00.767 [2024-11-28 18:13:12.844837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:00.767 [2024-11-28 18:13:12.845019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:00.767 [2024-11-28 18:13:12.845052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:00.767 [2024-11-28 18:13:12.845245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:00.767 [2024-11-28 18:13:12.845278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:00.767 passed 00:11:00.767 Test: blockdev nvme admin passthru ...passed 00:11:01.027 Test: blockdev copy ...passed 00:11:01.027 00:11:01.027 Run Summary: Type Total Ran Passed Failed Inactive 00:11:01.027 suites 1 1 n/a 0 0 00:11:01.027 tests 23 23 23 0 0 00:11:01.027 asserts 152 152 152 0 n/a 00:11:01.027 00:11:01.027 Elapsed time = 1.189 seconds 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.964 rmmod nvme_tcp 00:11:01.964 rmmod nvme_fabrics 00:11:01.964 rmmod nvme_keyring 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2396439 ']' 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2396439 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2396439 ']' 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2396439 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2396439 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2396439' 00:11:01.964 killing process with pid 2396439 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2396439 00:11:01.964 18:13:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2396439 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.344 18:13:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.251 18:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:05.251 00:11:05.251 real 0m9.363s 00:11:05.251 user 0m21.921s 00:11:05.251 sys 0m2.557s 00:11:05.251 18:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.251 18:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.251 ************************************ 00:11:05.251 END TEST nvmf_bdevio 00:11:05.251 ************************************ 00:11:05.251 18:13:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:05.251 00:11:05.251 real 4m31.927s 00:11:05.251 user 11m55.505s 00:11:05.251 sys 1m10.102s 
00:11:05.251 18:13:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.251 18:13:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:05.251 ************************************ 00:11:05.251 END TEST nvmf_target_core 00:11:05.251 ************************************ 00:11:05.251 18:13:17 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:05.251 18:13:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:05.251 18:13:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.251 18:13:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:05.251 ************************************ 00:11:05.251 START TEST nvmf_target_extra 00:11:05.251 ************************************ 00:11:05.251 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:05.251 * Looking for test storage... 00:11:05.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:05.251 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:05.251 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:05.251 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:05.510 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:05.510 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.510 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.510 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.510 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.510 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.511 --rc genhtml_branch_coverage=1 00:11:05.511 --rc genhtml_function_coverage=1 00:11:05.511 --rc genhtml_legend=1 00:11:05.511 --rc geninfo_all_blocks=1 00:11:05.511 --rc geninfo_unexecuted_blocks=1 00:11:05.511 00:11:05.511 ' 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.511 --rc genhtml_branch_coverage=1 00:11:05.511 --rc genhtml_function_coverage=1 00:11:05.511 --rc genhtml_legend=1 00:11:05.511 --rc geninfo_all_blocks=1 00:11:05.511 --rc geninfo_unexecuted_blocks=1 00:11:05.511 00:11:05.511 ' 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.511 --rc genhtml_branch_coverage=1 00:11:05.511 --rc genhtml_function_coverage=1 00:11:05.511 --rc genhtml_legend=1 00:11:05.511 --rc geninfo_all_blocks=1 00:11:05.511 --rc geninfo_unexecuted_blocks=1 00:11:05.511 00:11:05.511 ' 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.511 --rc genhtml_branch_coverage=1 00:11:05.511 --rc genhtml_function_coverage=1 00:11:05.511 --rc genhtml_legend=1 00:11:05.511 --rc geninfo_all_blocks=1 00:11:05.511 --rc geninfo_unexecuted_blocks=1 00:11:05.511 00:11:05.511 ' 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:05.511 ************************************ 00:11:05.511 START TEST nvmf_example 00:11:05.511 ************************************ 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:05.511 * Looking for test storage... 
00:11:05.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:05.511 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:05.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.772 --rc genhtml_branch_coverage=1 00:11:05.772 --rc genhtml_function_coverage=1 00:11:05.772 --rc genhtml_legend=1 00:11:05.772 --rc geninfo_all_blocks=1 00:11:05.772 --rc geninfo_unexecuted_blocks=1 00:11:05.772 00:11:05.772 ' 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:05.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.772 --rc genhtml_branch_coverage=1 00:11:05.772 --rc genhtml_function_coverage=1 00:11:05.772 --rc genhtml_legend=1 00:11:05.772 --rc geninfo_all_blocks=1 00:11:05.772 --rc geninfo_unexecuted_blocks=1 00:11:05.772 00:11:05.772 ' 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:05.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.772 --rc genhtml_branch_coverage=1 00:11:05.772 --rc genhtml_function_coverage=1 00:11:05.772 --rc genhtml_legend=1 00:11:05.772 --rc geninfo_all_blocks=1 00:11:05.772 --rc geninfo_unexecuted_blocks=1 00:11:05.772 00:11:05.772 ' 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:05.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.772 --rc genhtml_branch_coverage=1 00:11:05.772 --rc genhtml_function_coverage=1 00:11:05.772 --rc genhtml_legend=1 00:11:05.772 --rc geninfo_all_blocks=1 00:11:05.772 --rc geninfo_unexecuted_blocks=1 00:11:05.772 00:11:05.772 ' 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:05.772 18:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.772 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:05.773 18:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:05.773 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.679 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:07.680 18:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:07.680 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:07.680 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:07.680 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:07.680 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.680 18:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:07.680 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:07.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:07.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:11:07.939 00:11:07.939 --- 10.0.0.2 ping statistics --- 00:11:07.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.939 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:11:07.939 00:11:07.939 --- 10.0.0.1 ping statistics --- 00:11:07.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.939 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2398998 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2398998 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2398998 ']' 00:11:07.939 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.940 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.940 18:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.940 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.940 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.913 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:09.197 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:21.413 Initializing NVMe Controllers 00:11:21.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:21.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:21.413 Initialization complete. Launching workers. 00:11:21.413 ======================================================== 00:11:21.413 Latency(us) 00:11:21.413 Device Information : IOPS MiB/s Average min max 00:11:21.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11420.20 44.61 5606.08 1334.71 16509.31 00:11:21.413 ======================================================== 00:11:21.413 Total : 11420.20 44.61 5606.08 1334.71 16509.31 00:11:21.413 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:21.413 rmmod nvme_tcp 00:11:21.413 rmmod nvme_fabrics 00:11:21.413 rmmod nvme_keyring 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2398998 ']' 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2398998 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2398998 ']' 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2398998 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2398998 00:11:21.413 18:13:31 
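For completeness, here is the spdk_nvme_perf invocation above restated with its knobs broken out (the option values are copied from the trace; the comments and the throughput cross-check are ours):

  # The initiator-side load generator used by the test, option by option.
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  perf_args=(
      -q 64        # queue depth
      -o 4096      # 4 KiB I/O size
      -w randrw    # random mixed read/write workload
      -M 30        # rwmixread: 30% reads, 70% writes
      -t 10        # run time in seconds
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  )
  "$PERF" "${perf_args[@]}"

  # Cross-check of the reported result: 11420.20 IOPS * 4096 B ~= 44.6 MiB/s,
  # which matches the 44.61 MiB/s column in the table above.
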
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2398998' 00:11:21.413 killing process with pid 2398998 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2398998 00:11:21.413 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2398998 00:11:21.413 nvmf threads initialize successfully 00:11:21.413 bdev subsystem init successfully 00:11:21.413 created a nvmf target service 00:11:21.413 create targets's poll groups done 00:11:21.413 all subsystems of target started 00:11:21.413 nvmf target is running 00:11:21.413 all subsystems of target stopped 00:11:21.413 destroy targets's poll groups done 00:11:21.413 destroyed the nvmf target service 00:11:21.413 bdev subsystem finish successfully 00:11:21.413 nvmf threads destroy successfully 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.413 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.794 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.794 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:22.794 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.794 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.794 00:11:22.794 real 0m17.410s 00:11:22.794 user 0m48.467s 00:11:22.794 sys 0m3.803s 00:11:22.794 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.794 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.794 ************************************ 00:11:22.794 END TEST nvmf_example 00:11:22.794 ************************************ 00:11:23.056 18:13:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:23.057 18:13:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.057 18:13:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.057 18:13:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:23.057 ************************************ 00:11:23.057 START TEST nvmf_filesystem 00:11:23.057 ************************************ 00:11:23.057 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:23.057 * Looking for test storage... 00:11:23.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.057 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.057 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.057 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.057 --rc genhtml_branch_coverage=1 00:11:23.057 --rc genhtml_function_coverage=1 00:11:23.057 --rc genhtml_legend=1 00:11:23.057 --rc geninfo_all_blocks=1 00:11:23.057 --rc geninfo_unexecuted_blocks=1 00:11:23.057 00:11:23.057 ' 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.057 --rc genhtml_branch_coverage=1 00:11:23.057 --rc genhtml_function_coverage=1 00:11:23.057 --rc genhtml_legend=1 00:11:23.057 --rc geninfo_all_blocks=1 00:11:23.057 --rc geninfo_unexecuted_blocks=1 00:11:23.057 00:11:23.057 ' 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:23.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.057 --rc genhtml_branch_coverage=1 00:11:23.057 --rc genhtml_function_coverage=1 00:11:23.057 --rc genhtml_legend=1 00:11:23.057 --rc geninfo_all_blocks=1 00:11:23.057 --rc geninfo_unexecuted_blocks=1 00:11:23.057 00:11:23.057 ' 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.057 --rc genhtml_branch_coverage=1 00:11:23.057 --rc genhtml_function_coverage=1 00:11:23.057 --rc genhtml_legend=1 00:11:23.057 --rc geninfo_all_blocks=1 00:11:23.057 --rc geninfo_unexecuted_blocks=1 00:11:23.057 00:11:23.057 ' 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:23.057 18:13:35 
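The burst of scripts/common.sh lines above is the harness deciding whether the installed lcov is new enough: the two version strings are split on '.', '-' and ':' and compared field by field. A condensed stand-alone sketch of that comparison (version_lt is a hypothetical helper written for illustration; it assumes purely numeric fields, as the traced code effectively does):

  # Sketch: succeed if dotted version $1 is strictly lower than $2.
  version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # mirrors the 'lt 1.15 2' check in the trace
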
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:23.057 
18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:23.057 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:23.058 #define SPDK_CONFIG_H 00:11:23.058 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:23.058 #define SPDK_CONFIG_APPS 1 00:11:23.058 #define SPDK_CONFIG_ARCH native 00:11:23.058 #define SPDK_CONFIG_ASAN 1 00:11:23.058 #undef SPDK_CONFIG_AVAHI 00:11:23.058 #undef SPDK_CONFIG_CET 00:11:23.058 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:23.058 #define SPDK_CONFIG_COVERAGE 1 00:11:23.058 #define SPDK_CONFIG_CROSS_PREFIX 00:11:23.058 #undef SPDK_CONFIG_CRYPTO 00:11:23.058 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:23.058 #undef SPDK_CONFIG_CUSTOMOCF 00:11:23.058 #undef SPDK_CONFIG_DAOS 00:11:23.058 #define SPDK_CONFIG_DAOS_DIR 00:11:23.058 #define SPDK_CONFIG_DEBUG 1 00:11:23.058 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:23.058 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:23.058 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:23.058 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:23.058 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:23.058 #undef SPDK_CONFIG_DPDK_UADK 00:11:23.058 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:23.058 #define SPDK_CONFIG_EXAMPLES 1 00:11:23.058 #undef SPDK_CONFIG_FC 00:11:23.058 #define SPDK_CONFIG_FC_PATH 00:11:23.058 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:23.058 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:23.058 #define SPDK_CONFIG_FSDEV 1 00:11:23.058 #undef SPDK_CONFIG_FUSE 00:11:23.058 #undef SPDK_CONFIG_FUZZER 00:11:23.058 #define SPDK_CONFIG_FUZZER_LIB 00:11:23.058 #undef SPDK_CONFIG_GOLANG 00:11:23.058 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:23.058 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:23.058 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:23.058 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:23.058 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:23.058 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:23.058 #undef SPDK_CONFIG_HAVE_LZ4 00:11:23.058 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:23.058 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:23.058 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:23.058 #define SPDK_CONFIG_IDXD 1 00:11:23.058 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:23.058 #undef SPDK_CONFIG_IPSEC_MB 00:11:23.058 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:23.058 #define SPDK_CONFIG_ISAL 1 00:11:23.058 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:23.058 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:23.058 #define SPDK_CONFIG_LIBDIR 00:11:23.058 #undef SPDK_CONFIG_LTO 00:11:23.058 #define SPDK_CONFIG_MAX_LCORES 128 00:11:23.058 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:23.058 #define SPDK_CONFIG_NVME_CUSE 1 00:11:23.058 #undef SPDK_CONFIG_OCF 00:11:23.058 #define SPDK_CONFIG_OCF_PATH 00:11:23.058 #define SPDK_CONFIG_OPENSSL_PATH 00:11:23.058 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:23.058 #define SPDK_CONFIG_PGO_DIR 00:11:23.058 #undef SPDK_CONFIG_PGO_USE 00:11:23.058 #define SPDK_CONFIG_PREFIX /usr/local 00:11:23.058 #undef SPDK_CONFIG_RAID5F 00:11:23.058 #undef SPDK_CONFIG_RBD 00:11:23.058 #define SPDK_CONFIG_RDMA 1 00:11:23.058 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:23.058 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:23.058 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:23.058 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:23.058 #define SPDK_CONFIG_SHARED 1 00:11:23.058 #undef SPDK_CONFIG_SMA 00:11:23.058 #define SPDK_CONFIG_TESTS 1 00:11:23.058 #undef SPDK_CONFIG_TSAN 
00:11:23.058 #define SPDK_CONFIG_UBLK 1 00:11:23.058 #define SPDK_CONFIG_UBSAN 1 00:11:23.058 #undef SPDK_CONFIG_UNIT_TESTS 00:11:23.058 #undef SPDK_CONFIG_URING 00:11:23.058 #define SPDK_CONFIG_URING_PATH 00:11:23.058 #undef SPDK_CONFIG_URING_ZNS 00:11:23.058 #undef SPDK_CONFIG_USDT 00:11:23.058 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:23.058 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:23.058 #undef SPDK_CONFIG_VFIO_USER 00:11:23.058 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:23.058 #define SPDK_CONFIG_VHOST 1 00:11:23.058 #define SPDK_CONFIG_VIRTIO 1 00:11:23.058 #undef SPDK_CONFIG_VTUNE 00:11:23.058 #define SPDK_CONFIG_VTUNE_DIR 00:11:23.058 #define SPDK_CONFIG_WERROR 1 00:11:23.058 #define SPDK_CONFIG_WPDK_DIR 00:11:23.058 #undef SPDK_CONFIG_XNVME 00:11:23.058 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:23.058 18:13:35 
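A small aside on the config-header check traced just above: applications.sh only needs to know whether this SPDK build was configured with debug support before honouring SPDK_AUTOTEST_DEBUG_APPS, and it learns that by pattern-matching include/spdk/config.h. A simplified, grep-based equivalent of that probe (the real code uses a bash pattern match on the file contents):

  # Simplified equivalent of the SPDK_CONFIG_DEBUG probe seen above.
  CONFIG_H=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h

  if grep -q '^#define SPDK_CONFIG_DEBUG 1' "$CONFIG_H"; then
      echo "debug build: SPDK_AUTOTEST_DEBUG_APPS can take effect"
  else
      echo "non-debug build: the debug-app wrappers are skipped"
  fi

In this run config.h does define SPDK_CONFIG_DEBUG (see the dump above), so the check passes.
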
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:23.058 18:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:23.058 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2400903 ]] 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2400903 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
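The stretch of trace above is autotest_common.sh exporting the runtime environment for the whole run. Stripped of the long, repeated workspace prefixes, the handful of settings most useful when reading the rest of this log are roughly the following (values copied from the trace; this is a condensed excerpt, not the full export list):

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file   # suppresses the known libfuse3.so leak
export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock                        # socket used by every rpc_cmd below
export HUGEMEM=4096 CLEAR_HUGE=yes                                # hugepage budget for DPDK/SPDK
export MAKEFLAGS=-j48
TEST_TRANSPORT=tcp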
00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:23.059 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.k7KUMy 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.k7KUMy/tests/target /tmp/spdk.k7KUMy 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:23.060 18:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55112826880 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988519936 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6875693056 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982893568 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:23.060 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375269376 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22437888 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993870848 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=389120 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:23.319 18:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:23.319 * Looking for test storage... 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55112826880 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9090285568 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:23.319 18:13:35 
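set_test_storage, traced above, walks three candidate directories (the test dir itself, a per-test dir under a mktemp fallback, and the fallback root), reads df output into the mounts/fss/sizes/avails arrays, and keeps the first candidate whose filesystem has at least the requested ~2 GiB free without being pushed past 95% full; here the overlay root with ~55 GB free wins and SPDK_TEST_STORAGE is pointed at the target test directory. A simplified sketch of that selection (the real helper parses `df -T` into associative arrays; the variable handling is condensed here):

testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target   # from the trace
storage_fallback=$(mktemp -udt spdk.XXXXXX)                                  # e.g. /tmp/spdk.k7KUMy
requested_size=2214592512                                                    # 2 GiB + 64 MiB margin, as logged
mkdir -p "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"
for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
    avail_kb=$(df -P "$target_dir" | awk 'NR == 2 {print $4}')   # free space in 1 KiB blocks
    [ -n "$avail_kb" ] || continue
    if (( avail_kb * 1024 >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        break
    fi
done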
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.319 --rc genhtml_branch_coverage=1 00:11:23.319 --rc genhtml_function_coverage=1 00:11:23.319 --rc genhtml_legend=1 00:11:23.319 --rc geninfo_all_blocks=1 00:11:23.319 --rc geninfo_unexecuted_blocks=1 00:11:23.319 00:11:23.319 ' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.319 --rc genhtml_branch_coverage=1 00:11:23.319 --rc genhtml_function_coverage=1 00:11:23.319 --rc genhtml_legend=1 00:11:23.319 --rc geninfo_all_blocks=1 00:11:23.319 --rc geninfo_unexecuted_blocks=1 00:11:23.319 00:11:23.319 ' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:23.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.319 --rc genhtml_branch_coverage=1 00:11:23.319 --rc genhtml_function_coverage=1 00:11:23.319 --rc genhtml_legend=1 00:11:23.319 --rc geninfo_all_blocks=1 00:11:23.319 --rc geninfo_unexecuted_blocks=1 00:11:23.319 00:11:23.319 ' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.319 --rc genhtml_branch_coverage=1 00:11:23.319 --rc genhtml_function_coverage=1 00:11:23.319 --rc genhtml_legend=1 00:11:23.319 --rc geninfo_all_blocks=1 00:11:23.319 --rc geninfo_unexecuted_blocks=1 00:11:23.319 00:11:23.319 ' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
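The lcov check above (`lt 1.15 2`) is scripts/common.sh comparing the installed lcov version field by field against 2 so the right --rc option spelling can be chosen; since 1.15 is older than 2, LCOV_OPTS gets the old-style lcov_branch_coverage/lcov_function_coverage flags seen in the trace. A small standalone sketch of the same comparison (ver_lt is a made-up name for this example; the real helpers are lt/cmp_versions):

# Succeeds (returns 0) when $1 is strictly older than $2, comparing dot/dash-separated fields
ver_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i
    for (( i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov < 2: use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"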
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:23.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:23.319 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:25.226 
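The "[: : integer expression expected" complaint above is the harness tripping over itself rather than the test failing: line 33 of test/nvmf/common.sh runs a numeric test against a variable that is empty in this environment (the trace shows '[' '' -eq 1 ']'), so the test errors out, that branch is skipped, and the script carries on. A guard of the following shape would silence it; this is purely illustrative (SOME_FLAG is a placeholder, not the actual variable name), not the upstream fix:

SOME_FLAG=${SOME_FLAG:-0}            # default the flag to 0 so the numeric test never sees ''
if [ "$SOME_FLAG" -eq 1 ]; then
    echo "would append the optional nvmf_tgt arguments here"
fi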
18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:25.226 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:25.227 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:25.227 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:25.227 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:25.227 Found net devices under 
0000:0a:00.1: cvl_0_1 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:25.227 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:25.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:11:25.486 00:11:25.486 --- 10.0.0.2 ping statistics --- 00:11:25.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.486 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:11:25.486 00:11:25.486 --- 10.0.0.1 ping statistics --- 00:11:25.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.486 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.486 ************************************ 00:11:25.486 START TEST nvmf_filesystem_no_in_capsule 00:11:25.486 ************************************ 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
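nvmf_tcp_init above moves the target-side port (cvl_0_0) into its own network namespace and leaves the initiator-side port (cvl_0_1) in the root namespace, so the two E810 ports can address each other as separate hosts: 10.0.0.2 for the target, 10.0.0.1 for the initiator. The two pings confirm the path in both directions before any NVMe traffic flows. The essential commands, collapsed from the trace:

ip netns add cvl_0_0_ns_spdk                        # namespace for the target-side port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace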
00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2402592 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2402592 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2402592 ']' 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.486 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.747 [2024-11-28 18:13:37.588818] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:25.747 [2024-11-28 18:13:37.588970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.747 [2024-11-28 18:13:37.737336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.007 [2024-11-28 18:13:37.877626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.007 [2024-11-28 18:13:37.877705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.007 [2024-11-28 18:13:37.877730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.007 [2024-11-28 18:13:37.877754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.007 [2024-11-28 18:13:37.877774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
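nvmfappstart launches nvmf_tgt inside that namespace and waits for its RPC socket; the rpc_cmd calls traced just below then build the whole export path over JSON-RPC: a TCP transport with in-capsule data disabled, a 512 MiB malloc bdev, a subsystem with serial SPDKISFASTANDAWESOME, the bdev as its namespace, and a listener on 10.0.0.2:4420. A condensed equivalent is sketched here (the polling loop is a simplified stand-in for the waitforlisten helper, and the rpc.py path is assumed from the workspace layout above):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5                                        # wait for the app to open its RPC socket
done
rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192 -c 0     # -c 0: no in-capsule data in this variant
rpc bdev_malloc_create 512 512 -b Malloc1            # 512 MiB bdev with 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420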
00:11:26.007 [2024-11-28 18:13:37.880627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.007 [2024-11-28 18:13:37.880697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.007 [2024-11-28 18:13:37.880787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.007 [2024-11-28 18:13:37.880793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.574 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.574 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:26.574 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.574 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.574 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.574 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.574 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:26.574 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:26.574 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.574 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.574 [2024-11-28 18:13:38.626478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.574 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.575 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:26.575 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.575 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.141 Malloc1 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.141 18:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.141 [2024-11-28 18:13:39.208694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:27.141 { 00:11:27.141 "name": "Malloc1", 00:11:27.141 "aliases": [ 00:11:27.141 "f5c6861b-a49e-4eca-9171-71894fb16cbe" 00:11:27.141 ], 00:11:27.141 "product_name": "Malloc disk", 00:11:27.141 "block_size": 512, 00:11:27.141 "num_blocks": 1048576, 00:11:27.141 "uuid": "f5c6861b-a49e-4eca-9171-71894fb16cbe", 00:11:27.141 "assigned_rate_limits": { 00:11:27.141 "rw_ios_per_sec": 0, 00:11:27.141 "rw_mbytes_per_sec": 0, 00:11:27.141 "r_mbytes_per_sec": 0, 00:11:27.141 "w_mbytes_per_sec": 0 00:11:27.141 }, 00:11:27.141 "claimed": true, 00:11:27.141 "claim_type": "exclusive_write", 00:11:27.141 "zoned": false, 00:11:27.141 "supported_io_types": { 00:11:27.141 "read": 
true, 00:11:27.141 "write": true, 00:11:27.141 "unmap": true, 00:11:27.141 "flush": true, 00:11:27.141 "reset": true, 00:11:27.141 "nvme_admin": false, 00:11:27.141 "nvme_io": false, 00:11:27.141 "nvme_io_md": false, 00:11:27.141 "write_zeroes": true, 00:11:27.141 "zcopy": true, 00:11:27.141 "get_zone_info": false, 00:11:27.141 "zone_management": false, 00:11:27.141 "zone_append": false, 00:11:27.141 "compare": false, 00:11:27.141 "compare_and_write": false, 00:11:27.141 "abort": true, 00:11:27.141 "seek_hole": false, 00:11:27.141 "seek_data": false, 00:11:27.141 "copy": true, 00:11:27.141 "nvme_iov_md": false 00:11:27.141 }, 00:11:27.141 "memory_domains": [ 00:11:27.141 { 00:11:27.141 "dma_device_id": "system", 00:11:27.141 "dma_device_type": 1 00:11:27.141 }, 00:11:27.141 { 00:11:27.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.141 "dma_device_type": 2 00:11:27.141 } 00:11:27.141 ], 00:11:27.141 "driver_specific": {} 00:11:27.141 } 00:11:27.141 ]' 00:11:27.141 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:27.400 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:27.400 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:27.400 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:27.400 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:27.400 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:27.400 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:27.400 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.968 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.968 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:27.968 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.968 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:27.968 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:29.873 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:30.131 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:30.700 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:31.638 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:31.638 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:31.638 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.897 ************************************ 00:11:31.897 START TEST filesystem_ext4 00:11:31.897 ************************************ 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
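The ext4 pass that starts here, and the btrfs and xfs passes after it, all exercise the same sequence against the exported namespace: make a filesystem on the first partition, mount it, write and remove a small file, unmount, then confirm the target process and block devices are still present. A minimal sketch of that loop, using the device path and target PID seen in this particular run (both are environment-specific, not fixed values):

  dev=/dev/nvme0n1p1        # first partition created by parted on the exported namespace
  nvmfpid=2402592           # nvmf_tgt PID captured at startup in this run

  mkfs.ext4 -F "$dev"       # the btrfs and xfs passes use 'mkfs.btrfs -f' / 'mkfs.xfs -f' instead
  mount "$dev" /mnt/device
  touch /mnt/device/aaa && sync            # prove the mount accepts I/O
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # target must still be alive after the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible as well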
00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:31.897 18:13:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:31.897 mke2fs 1.47.0 (5-Feb-2023) 00:11:31.897 Discarding device blocks: 0/522240 done 00:11:31.897 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:31.897 Filesystem UUID: 50fdafba-8920-4b54-aa00-b9d52a29166d 00:11:31.897 Superblock backups stored on blocks: 00:11:31.897 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:31.897 00:11:31.897 Allocating group tables: 0/64 done 00:11:31.897 Writing inode tables: 0/64 done 00:11:31.897 Creating journal (8192 blocks): done 00:11:32.156 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:11:32.156 00:11:32.156 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:32.156 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:38.722 18:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:38.722 18:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:38.722 18:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:38.722 18:13:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.722 
18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2402592 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.722 00:11:38.722 real 0m6.286s 00:11:38.722 user 0m0.022s 00:11:38.722 sys 0m0.061s 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:38.722 ************************************ 00:11:38.722 END TEST filesystem_ext4 00:11:38.722 ************************************ 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.722 ************************************ 00:11:38.722 START TEST filesystem_btrfs 00:11:38.722 ************************************ 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:38.722 18:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:38.722 btrfs-progs v6.8.1 00:11:38.722 See https://btrfs.readthedocs.io for more information. 00:11:38.722 00:11:38.722 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:38.722 NOTE: several default settings have changed in version 5.15, please make sure 00:11:38.722 this does not affect your deployments: 00:11:38.722 - DUP for metadata (-m dup) 00:11:38.722 - enabled no-holes (-O no-holes) 00:11:38.722 - enabled free-space-tree (-R free-space-tree) 00:11:38.722 00:11:38.722 Label: (null) 00:11:38.722 UUID: bc929ea3-c9f4-4a8e-a7a5-f761c69d2b67 00:11:38.722 Node size: 16384 00:11:38.722 Sector size: 4096 (CPU page size: 4096) 00:11:38.722 Filesystem size: 510.00MiB 00:11:38.722 Block group profiles: 00:11:38.722 Data: single 8.00MiB 00:11:38.722 Metadata: DUP 32.00MiB 00:11:38.722 System: DUP 8.00MiB 00:11:38.722 SSD detected: yes 00:11:38.722 Zoned device: no 00:11:38.722 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:38.722 Checksum: crc32c 00:11:38.722 Number of devices: 1 00:11:38.722 Devices: 00:11:38.722 ID SIZE PATH 00:11:38.722 1 510.00MiB /dev/nvme0n1p1 00:11:38.722 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2402592 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.722 
18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.722 00:11:38.722 real 0m0.687s 00:11:38.722 user 0m0.010s 00:11:38.722 sys 0m0.111s 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:38.722 ************************************ 00:11:38.722 END TEST filesystem_btrfs 00:11:38.722 ************************************ 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.722 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.980 ************************************ 00:11:38.980 START TEST filesystem_xfs 00:11:38.980 ************************************ 00:11:38.980 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:38.980 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:38.980 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.980 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:38.980 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:38.980 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:38.980 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:38.980 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:38.980 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:38.980 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:38.980 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:38.980 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:38.980 = sectsz=512 attr=2, projid32bit=1 00:11:38.980 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:38.980 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:38.980 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:38.980 = sunit=0 swidth=0 blks 00:11:38.980 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:38.980 log =internal log bsize=4096 blocks=16384, version=2 00:11:38.980 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:38.980 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:39.917 Discarding blocks...Done. 00:11:39.917 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:39.917 18:13:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:41.822 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:41.822 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:41.822 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:41.822 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:41.822 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:41.822 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:41.822 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2402592 00:11:41.822 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:41.822 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:41.822 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:41.822 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:41.823 00:11:41.823 real 0m2.676s 00:11:41.823 user 0m0.017s 00:11:41.823 sys 0m0.063s 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:41.823 ************************************ 00:11:41.823 END TEST filesystem_xfs 00:11:41.823 ************************************ 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.823 18:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2402592 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2402592 ']' 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2402592 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2402592 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2402592' 00:11:41.823 killing process with pid 2402592 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2402592 00:11:41.823 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2402592 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:44.358 00:11:44.358 real 0m18.622s 00:11:44.358 user 1m10.312s 00:11:44.358 sys 0m2.501s 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.358 ************************************ 00:11:44.358 END TEST nvmf_filesystem_no_in_capsule 00:11:44.358 ************************************ 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.358 ************************************ 00:11:44.358 START TEST nvmf_filesystem_in_capsule 00:11:44.358 ************************************ 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2404963 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2404963 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2404963 ']' 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
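This second pass differs from the no_in_capsule pass above only in how the TCP transport is created: filesystem.sh is invoked with in_capsule=4096, so the transport accepts up to 4096 bytes of write data inside the command capsule itself. A sketch of the two transport setups, assuming the same rpc_cmd wrapper used throughout this log (the -c 0 form for the first pass is inferred from the test parameter, not shown verbatim in this excerpt):

  # first pass (nvmf_filesystem_no_in_capsule): in-capsule data disabled
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  # this pass (nvmf_filesystem_in_capsule): allow 4096-byte in-capsule data
  # -c maps to --in-capsule-data-size, -u to --io-unit-size
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096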
00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.358 18:13:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.358 [2024-11-28 18:13:56.262317] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:44.358 [2024-11-28 18:13:56.262492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.358 [2024-11-28 18:13:56.418248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.617 [2024-11-28 18:13:56.544374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.617 [2024-11-28 18:13:56.544458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.617 [2024-11-28 18:13:56.544479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.617 [2024-11-28 18:13:56.544499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.617 [2024-11-28 18:13:56.544516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.617 [2024-11-28 18:13:56.547048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.617 [2024-11-28 18:13:56.547120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.617 [2024-11-28 18:13:56.547159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.617 [2024-11-28 18:13:56.547177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.182 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.182 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:45.182 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:45.182 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:45.182 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.182 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.182 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:45.182 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:45.182 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.182 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.182 [2024-11-28 18:13:57.276470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.441 18:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.441 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:45.441 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.441 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.007 Malloc1 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.007 [2024-11-28 18:13:57.881933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:46.007 18:13:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:46.007 { 00:11:46.007 "name": "Malloc1", 00:11:46.007 "aliases": [ 00:11:46.007 "7a6ad08c-0ff8-49fe-a7ad-6cc777612083" 00:11:46.007 ], 00:11:46.007 "product_name": "Malloc disk", 00:11:46.007 "block_size": 512, 00:11:46.007 "num_blocks": 1048576, 00:11:46.007 "uuid": "7a6ad08c-0ff8-49fe-a7ad-6cc777612083", 00:11:46.007 "assigned_rate_limits": { 00:11:46.007 "rw_ios_per_sec": 0, 00:11:46.007 "rw_mbytes_per_sec": 0, 00:11:46.007 "r_mbytes_per_sec": 0, 00:11:46.007 "w_mbytes_per_sec": 0 00:11:46.007 }, 00:11:46.007 "claimed": true, 00:11:46.007 "claim_type": "exclusive_write", 00:11:46.007 "zoned": false, 00:11:46.007 "supported_io_types": { 00:11:46.007 "read": true, 00:11:46.007 "write": true, 00:11:46.007 "unmap": true, 00:11:46.007 "flush": true, 00:11:46.007 "reset": true, 00:11:46.007 "nvme_admin": false, 00:11:46.007 "nvme_io": false, 00:11:46.007 "nvme_io_md": false, 00:11:46.007 "write_zeroes": true, 00:11:46.007 "zcopy": true, 00:11:46.007 "get_zone_info": false, 00:11:46.007 "zone_management": false, 00:11:46.007 "zone_append": false, 00:11:46.007 "compare": false, 00:11:46.007 "compare_and_write": false, 00:11:46.007 "abort": true, 00:11:46.007 "seek_hole": false, 00:11:46.007 "seek_data": false, 00:11:46.007 "copy": true, 00:11:46.007 "nvme_iov_md": false 00:11:46.007 }, 00:11:46.007 "memory_domains": [ 00:11:46.007 { 00:11:46.007 "dma_device_id": "system", 00:11:46.007 "dma_device_type": 1 00:11:46.007 }, 00:11:46.007 { 00:11:46.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.007 "dma_device_type": 2 00:11:46.007 } 00:11:46.007 ], 00:11:46.007 "driver_specific": {} 00:11:46.007 } 00:11:46.007 ]' 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:46.007 18:13:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.574 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.574 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:46.574 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.574 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:46.574 18:13:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:48.476 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:48.734 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:48.994 18:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:49.254 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.191 ************************************ 00:11:50.191 START TEST filesystem_in_capsule_ext4 00:11:50.191 ************************************ 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:50.191 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:50.191 mke2fs 1.47.0 (5-Feb-2023) 00:11:50.451 Discarding device blocks: 0/522240 done 00:11:50.451 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:50.451 Filesystem UUID: 590bec0e-8e72-41ff-b16c-20a5f1dce583 00:11:50.451 Superblock backups stored on blocks: 00:11:50.451 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:50.451 00:11:50.451 Allocating group tables: 0/64 done 00:11:50.451 Writing inode tables: 
0/64 done 00:11:51.832 Creating journal (8192 blocks): done 00:11:51.832 Writing superblocks and filesystem accounting information: 0/64 done 00:11:51.832 00:11:51.832 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:51.832 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2404963 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.149 00:11:57.149 real 0m6.781s 00:11:57.149 user 0m0.026s 00:11:57.149 sys 0m0.047s 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:57.149 ************************************ 00:11:57.149 END TEST filesystem_in_capsule_ext4 00:11:57.149 ************************************ 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.149 18:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.149 
************************************ 00:11:57.149 START TEST filesystem_in_capsule_btrfs 00:11:57.149 ************************************ 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:57.149 btrfs-progs v6.8.1 00:11:57.149 See https://btrfs.readthedocs.io for more information. 00:11:57.149 00:11:57.149 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:57.149 NOTE: several default settings have changed in version 5.15, please make sure 00:11:57.149 this does not affect your deployments: 00:11:57.149 - DUP for metadata (-m dup) 00:11:57.149 - enabled no-holes (-O no-holes) 00:11:57.149 - enabled free-space-tree (-R free-space-tree) 00:11:57.149 00:11:57.149 Label: (null) 00:11:57.149 UUID: ab37abec-728e-48f5-abcd-453afc6db77c 00:11:57.149 Node size: 16384 00:11:57.149 Sector size: 4096 (CPU page size: 4096) 00:11:57.149 Filesystem size: 510.00MiB 00:11:57.149 Block group profiles: 00:11:57.149 Data: single 8.00MiB 00:11:57.149 Metadata: DUP 32.00MiB 00:11:57.149 System: DUP 8.00MiB 00:11:57.149 SSD detected: yes 00:11:57.149 Zoned device: no 00:11:57.149 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:57.149 Checksum: crc32c 00:11:57.149 Number of devices: 1 00:11:57.149 Devices: 00:11:57.149 ID SIZE PATH 00:11:57.149 1 510.00MiB /dev/nvme0n1p1 00:11:57.149 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:57.149 18:14:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2404963 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.088 00:11:58.088 real 0m1.081s 00:11:58.088 user 0m0.017s 00:11:58.088 sys 0m0.101s 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:58.088 ************************************ 00:11:58.088 END TEST filesystem_in_capsule_btrfs 00:11:58.088 ************************************ 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.088 ************************************ 00:11:58.088 START TEST filesystem_in_capsule_xfs 00:11:58.088 ************************************ 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:58.088 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:58.347 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:58.347 = sectsz=512 attr=2, projid32bit=1 00:11:58.347 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:58.347 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:58.347 data = bsize=4096 blocks=130560, imaxpct=25 00:11:58.347 = sunit=0 swidth=0 blks 00:11:58.347 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:58.347 log =internal log bsize=4096 blocks=16384, version=2 00:11:58.347 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:58.347 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:59.285 Discarding blocks...Done. 
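The btrfs and xfs passes traced above both exercise the same target/filesystem.sh flow: format the NVMe-oF attached partition, mount it, do a small write and delete, unmount, then confirm the target process and the block devices are still present. A condensed sketch of those steps, pulled from the xtrace lines above (device, mount point and PID are the ones used in this run; this is a summary, not the test script itself):

mkfs.xfs -f /dev/nvme0n1p1              # mkfs.btrfs -f in the btrfs variant above
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                   # small write through the exported namespace
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 2404963                         # target PID from this run; only checks it is still alive
lsblk -l -o NAME | grep -q -w nvme0n1   # whole namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1 # partition still visible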
00:11:59.285 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:59.285 18:14:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.186 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2404963 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.446 00:12:01.446 real 0m3.288s 00:12:01.446 user 0m0.014s 00:12:01.446 sys 0m0.059s 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:01.446 ************************************ 00:12:01.446 END TEST filesystem_in_capsule_xfs 00:12:01.446 ************************************ 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:01.446 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2404963 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2404963 ']' 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2404963 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2404963 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2404963' 00:12:01.706 killing process with pid 2404963 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2404963 00:12:01.706 18:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2404963 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:04.245 00:12:04.245 real 0m19.959s 00:12:04.245 user 1m15.749s 00:12:04.245 sys 0m2.411s 00:12:04.245 18:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.245 ************************************ 00:12:04.245 END TEST nvmf_filesystem_in_capsule 00:12:04.245 ************************************ 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.245 rmmod nvme_tcp 00:12:04.245 rmmod nvme_fabrics 00:12:04.245 rmmod nvme_keyring 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.245 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:06.784 00:12:06.784 real 0m43.324s 00:12:06.784 user 2m27.128s 00:12:06.784 sys 0m6.592s 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:06.784 
************************************ 00:12:06.784 END TEST nvmf_filesystem 00:12:06.784 ************************************ 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:06.784 ************************************ 00:12:06.784 START TEST nvmf_target_discovery 00:12:06.784 ************************************ 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:06.784 * Looking for test storage... 00:12:06.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:06.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.784 --rc genhtml_branch_coverage=1 00:12:06.784 --rc genhtml_function_coverage=1 00:12:06.784 --rc genhtml_legend=1 00:12:06.784 --rc geninfo_all_blocks=1 00:12:06.784 --rc geninfo_unexecuted_blocks=1 00:12:06.784 00:12:06.784 ' 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:06.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.784 --rc genhtml_branch_coverage=1 00:12:06.784 --rc genhtml_function_coverage=1 00:12:06.784 --rc genhtml_legend=1 00:12:06.784 --rc geninfo_all_blocks=1 00:12:06.784 --rc geninfo_unexecuted_blocks=1 00:12:06.784 00:12:06.784 ' 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:06.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.784 --rc genhtml_branch_coverage=1 00:12:06.784 --rc genhtml_function_coverage=1 00:12:06.784 --rc genhtml_legend=1 00:12:06.784 --rc geninfo_all_blocks=1 00:12:06.784 --rc geninfo_unexecuted_blocks=1 00:12:06.784 00:12:06.784 ' 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:06.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.784 --rc genhtml_branch_coverage=1 00:12:06.784 --rc genhtml_function_coverage=1 00:12:06.784 --rc genhtml_legend=1 00:12:06.784 --rc geninfo_all_blocks=1 00:12:06.784 --rc geninfo_unexecuted_blocks=1 00:12:06.784 00:12:06.784 ' 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.784 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:06.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:06.785 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:08.687 18:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:08.687 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:08.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:08.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:08.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
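The gather_supported_nvmf_pci_devs loop traced above resolves each matching PCI function to its kernel net device by globbing sysfs. A minimal sketch of that mapping, using the two e810 ports found in this run (the array contents are illustrative; the sysfs glob and the suffix strip are the ones shown in the trace):

pci_devs=("0000:0a:00.0" "0000:0a:00.1")              # the two 0x159b ports found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../0000:0a:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface names
    net_devs+=("${pci_net_devs[@]}")
done
echo "net devices: ${net_devs[*]}"                    # cvl_0_0 cvl_0_1 in this run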
00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:08.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.688 18:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:08.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:12:08.688 00:12:08.688 --- 10.0.0.2 ping statistics --- 00:12:08.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.688 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:12:08.688 00:12:08.688 --- 10.0.0.1 ping statistics --- 00:12:08.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.688 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:08.688 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2409497 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2409497 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2409497 ']' 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.689 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.949 [2024-11-28 18:14:20.815881] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:08.949 [2024-11-28 18:14:20.816032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.949 [2024-11-28 18:14:20.972387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.208 [2024-11-28 18:14:21.118270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.208 [2024-11-28 18:14:21.118356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.208 [2024-11-28 18:14:21.118383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.208 [2024-11-28 18:14:21.118408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.208 [2024-11-28 18:14:21.118428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
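With nvmf_tgt now running inside the cvl_0_0_ns_spdk namespace and listening on /var/tmp/spdk.sock, the discovery test configures it entirely over JSON-RPC. The rpc_cmd calls traced below amount to roughly the following sequence; it is shown here in SPDK rpc.py form (rpc_cmd in these tests wraps that script, and rpc.py defaults to /var/tmp/spdk.sock), so treat it as a sketch rather than the script itself:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # options from NVMF_TRANSPORT_OPTS plus in-capsule size
for i in 1 2 3 4; do
    $rpc bdev_null_create Null$i 102400 512                   # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430    # surfaces as Discovery Log Entry 5 below

The nvme discover output and the nvmf_get_subsystems dump further down are the read-back of exactly this configuration.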
00:12:09.208 [2024-11-28 18:14:21.121311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.208 [2024-11-28 18:14:21.121373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.208 [2024-11-28 18:14:21.121426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.208 [2024-11-28 18:14:21.121433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.777 [2024-11-28 18:14:21.786808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.777 Null1 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.777 18:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.777 [2024-11-28 18:14:21.840677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.777 Null2 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.777 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:10.037 Null3 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.037 Null4 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.037 18:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.037 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:10.296 00:12:10.296 Discovery Log Number of Records 6, Generation counter 6 00:12:10.296 =====Discovery Log Entry 0====== 00:12:10.296 trtype: tcp 00:12:10.296 adrfam: ipv4 00:12:10.296 subtype: current discovery subsystem 00:12:10.296 treq: not required 00:12:10.296 portid: 0 00:12:10.296 trsvcid: 4420 00:12:10.296 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:10.296 traddr: 10.0.0.2 00:12:10.296 eflags: explicit discovery connections, duplicate discovery information 00:12:10.296 sectype: none 00:12:10.296 =====Discovery Log Entry 1====== 00:12:10.296 trtype: tcp 00:12:10.296 adrfam: ipv4 00:12:10.296 subtype: nvme subsystem 00:12:10.296 treq: not required 00:12:10.296 portid: 0 00:12:10.296 trsvcid: 4420 00:12:10.296 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:10.296 traddr: 10.0.0.2 00:12:10.296 eflags: none 00:12:10.296 sectype: none 00:12:10.296 =====Discovery Log Entry 2====== 00:12:10.296 trtype: tcp 00:12:10.296 adrfam: ipv4 00:12:10.296 subtype: nvme subsystem 00:12:10.296 treq: not required 00:12:10.296 portid: 0 00:12:10.296 trsvcid: 4420 00:12:10.296 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:10.296 traddr: 10.0.0.2 00:12:10.296 eflags: none 00:12:10.296 sectype: none 00:12:10.296 =====Discovery Log Entry 3====== 00:12:10.296 trtype: tcp 00:12:10.296 adrfam: ipv4 00:12:10.296 subtype: nvme subsystem 00:12:10.296 treq: not required 00:12:10.296 portid: 0 00:12:10.296 trsvcid: 4420 00:12:10.296 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:10.296 traddr: 10.0.0.2 00:12:10.296 eflags: none 00:12:10.296 sectype: none 00:12:10.296 =====Discovery Log Entry 4====== 00:12:10.296 trtype: tcp 00:12:10.296 adrfam: ipv4 00:12:10.296 subtype: nvme subsystem 
00:12:10.296 treq: not required 00:12:10.296 portid: 0 00:12:10.296 trsvcid: 4420 00:12:10.296 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:10.296 traddr: 10.0.0.2 00:12:10.297 eflags: none 00:12:10.297 sectype: none 00:12:10.297 =====Discovery Log Entry 5====== 00:12:10.297 trtype: tcp 00:12:10.297 adrfam: ipv4 00:12:10.297 subtype: discovery subsystem referral 00:12:10.297 treq: not required 00:12:10.297 portid: 0 00:12:10.297 trsvcid: 4430 00:12:10.297 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:10.297 traddr: 10.0.0.2 00:12:10.297 eflags: none 00:12:10.297 sectype: none 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:10.297 Perform nvmf subsystem discovery via RPC 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 [ 00:12:10.297 { 00:12:10.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:10.297 "subtype": "Discovery", 00:12:10.297 "listen_addresses": [ 00:12:10.297 { 00:12:10.297 "trtype": "TCP", 00:12:10.297 "adrfam": "IPv4", 00:12:10.297 "traddr": "10.0.0.2", 00:12:10.297 "trsvcid": "4420" 00:12:10.297 } 00:12:10.297 ], 00:12:10.297 "allow_any_host": true, 00:12:10.297 "hosts": [] 00:12:10.297 }, 00:12:10.297 { 00:12:10.297 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.297 "subtype": "NVMe", 00:12:10.297 "listen_addresses": [ 00:12:10.297 { 00:12:10.297 "trtype": "TCP", 00:12:10.297 "adrfam": "IPv4", 00:12:10.297 "traddr": "10.0.0.2", 00:12:10.297 "trsvcid": "4420" 00:12:10.297 } 00:12:10.297 ], 00:12:10.297 "allow_any_host": true, 00:12:10.297 "hosts": [], 00:12:10.297 "serial_number": "SPDK00000000000001", 00:12:10.297 "model_number": "SPDK bdev Controller", 00:12:10.297 "max_namespaces": 32, 00:12:10.297 "min_cntlid": 1, 00:12:10.297 "max_cntlid": 65519, 00:12:10.297 "namespaces": [ 00:12:10.297 { 00:12:10.297 "nsid": 1, 00:12:10.297 "bdev_name": "Null1", 00:12:10.297 "name": "Null1", 00:12:10.297 "nguid": "5E4ABCC13C11499B9E915A9F649BE90F", 00:12:10.297 "uuid": "5e4abcc1-3c11-499b-9e91-5a9f649be90f" 00:12:10.297 } 00:12:10.297 ] 00:12:10.297 }, 00:12:10.297 { 00:12:10.297 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:10.297 "subtype": "NVMe", 00:12:10.297 "listen_addresses": [ 00:12:10.297 { 00:12:10.297 "trtype": "TCP", 00:12:10.297 "adrfam": "IPv4", 00:12:10.297 "traddr": "10.0.0.2", 00:12:10.297 "trsvcid": "4420" 00:12:10.297 } 00:12:10.297 ], 00:12:10.297 "allow_any_host": true, 00:12:10.297 "hosts": [], 00:12:10.297 "serial_number": "SPDK00000000000002", 00:12:10.297 "model_number": "SPDK bdev Controller", 00:12:10.297 "max_namespaces": 32, 00:12:10.297 "min_cntlid": 1, 00:12:10.297 "max_cntlid": 65519, 00:12:10.297 "namespaces": [ 00:12:10.297 { 00:12:10.297 "nsid": 1, 00:12:10.297 "bdev_name": "Null2", 00:12:10.297 "name": "Null2", 00:12:10.297 "nguid": "10CC5A8193A64CBDB2DEA3A6E0717A19", 00:12:10.297 "uuid": "10cc5a81-93a6-4cbd-b2de-a3a6e0717a19" 00:12:10.297 } 00:12:10.297 ] 00:12:10.297 }, 00:12:10.297 { 00:12:10.297 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:10.297 "subtype": "NVMe", 00:12:10.297 "listen_addresses": [ 00:12:10.297 { 00:12:10.297 "trtype": "TCP", 00:12:10.297 "adrfam": "IPv4", 00:12:10.297 "traddr": "10.0.0.2", 
00:12:10.297 "trsvcid": "4420" 00:12:10.297 } 00:12:10.297 ], 00:12:10.297 "allow_any_host": true, 00:12:10.297 "hosts": [], 00:12:10.297 "serial_number": "SPDK00000000000003", 00:12:10.297 "model_number": "SPDK bdev Controller", 00:12:10.297 "max_namespaces": 32, 00:12:10.297 "min_cntlid": 1, 00:12:10.297 "max_cntlid": 65519, 00:12:10.297 "namespaces": [ 00:12:10.297 { 00:12:10.297 "nsid": 1, 00:12:10.297 "bdev_name": "Null3", 00:12:10.297 "name": "Null3", 00:12:10.297 "nguid": "6F45E9D3EBBC4C9DA96049773561F943", 00:12:10.297 "uuid": "6f45e9d3-ebbc-4c9d-a960-49773561f943" 00:12:10.297 } 00:12:10.297 ] 00:12:10.297 }, 00:12:10.297 { 00:12:10.297 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:10.297 "subtype": "NVMe", 00:12:10.297 "listen_addresses": [ 00:12:10.297 { 00:12:10.297 "trtype": "TCP", 00:12:10.297 "adrfam": "IPv4", 00:12:10.297 "traddr": "10.0.0.2", 00:12:10.297 "trsvcid": "4420" 00:12:10.297 } 00:12:10.297 ], 00:12:10.297 "allow_any_host": true, 00:12:10.297 "hosts": [], 00:12:10.297 "serial_number": "SPDK00000000000004", 00:12:10.297 "model_number": "SPDK bdev Controller", 00:12:10.297 "max_namespaces": 32, 00:12:10.297 "min_cntlid": 1, 00:12:10.297 "max_cntlid": 65519, 00:12:10.297 "namespaces": [ 00:12:10.297 { 00:12:10.297 "nsid": 1, 00:12:10.297 "bdev_name": "Null4", 00:12:10.297 "name": "Null4", 00:12:10.297 "nguid": "9770B448D00A46789A77E8E2C94DB5E4", 00:12:10.297 "uuid": "9770b448-d00a-4678-9a77-e8e2c94db5e4" 00:12:10.297 } 00:12:10.297 ] 00:12:10.297 } 00:12:10.297 ] 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.297 18:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:10.297 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:10.297 18:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.298 rmmod nvme_tcp 00:12:10.298 rmmod nvme_fabrics 00:12:10.298 rmmod nvme_keyring 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2409497 ']' 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2409497 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2409497 ']' 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2409497 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.298 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2409497 00:12:10.557 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.557 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.557 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2409497' 00:12:10.557 killing process with pid 2409497 00:12:10.557 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2409497 00:12:10.557 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2409497 00:12:11.495 18:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.495 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:11.495 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:11.495 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:11.496 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:11.496 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:11.496 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:11.496 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:11.496 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:11.496 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.496 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.496 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.036 00:12:14.036 real 0m7.292s 00:12:14.036 user 0m9.680s 00:12:14.036 sys 0m2.147s 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.036 ************************************ 00:12:14.036 END TEST nvmf_target_discovery 00:12:14.036 ************************************ 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.036 ************************************ 00:12:14.036 START TEST nvmf_referrals 00:12:14.036 ************************************ 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:14.036 * Looking for test storage... 
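For readers following the trace, the nvmf_target_discovery run that just ended reduces to a short sequence of SPDK RPCs. The sketch below is not the harness script itself: it replaces the test's rpc_cmd wrapper with scripts/rpc.py (path assumed to be the stock SPDK location seen in this workspace) and drops the xtrace plumbing; addresses, ports, names and sizes are taken verbatim from the trace above.

  # Condensed sketch of target/discovery.sh against an already-running nvmf_tgt with a TCP transport
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
  for i in 1 2 3 4; do
      $RPC bdev_null_create Null$i 102400 512                            # size/block size as in the trace
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery service listener
  $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430            # becomes discovery log entry 5
  nvme discover -t tcp -a 10.0.0.2 -s 4420                               # the test also passes --hostnqn/--hostid; expect 6 records
  $RPC nvmf_get_subsystems                                               # JSON view of the same state
  for i in 1 2 3 4; do                                                   # teardown
      $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      $RPC bdev_null_delete Null$i
  done
  $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430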
00:12:14.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.036 --rc genhtml_branch_coverage=1 00:12:14.036 --rc genhtml_function_coverage=1 00:12:14.036 --rc genhtml_legend=1 00:12:14.036 --rc geninfo_all_blocks=1 00:12:14.036 --rc geninfo_unexecuted_blocks=1 00:12:14.036 00:12:14.036 ' 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.036 --rc genhtml_branch_coverage=1 00:12:14.036 --rc genhtml_function_coverage=1 00:12:14.036 --rc genhtml_legend=1 00:12:14.036 --rc geninfo_all_blocks=1 00:12:14.036 --rc geninfo_unexecuted_blocks=1 00:12:14.036 00:12:14.036 ' 00:12:14.036 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:14.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.037 --rc genhtml_branch_coverage=1 00:12:14.037 --rc genhtml_function_coverage=1 00:12:14.037 --rc genhtml_legend=1 00:12:14.037 --rc geninfo_all_blocks=1 00:12:14.037 --rc geninfo_unexecuted_blocks=1 00:12:14.037 00:12:14.037 ' 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:14.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.037 --rc genhtml_branch_coverage=1 00:12:14.037 --rc genhtml_function_coverage=1 00:12:14.037 --rc genhtml_legend=1 00:12:14.037 --rc geninfo_all_blocks=1 00:12:14.037 --rc geninfo_unexecuted_blocks=1 00:12:14.037 00:12:14.037 ' 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
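The "[: : integer expression expected" message above comes from nvmf/common.sh line 33, where a flag that expanded to an empty string is compared with -eq 1. It is harmless here, but a defaulted expansion avoids the noise. A minimal sketch of the failure mode and the fix, with FLAG as a stand-in name rather than the real variable tested at that line:

  # Hypothetical illustration only; FLAG is not the actual variable in common.sh
  FLAG=""                                        # unset/empty, as in the trace
  if [ "$FLAG" -eq 1 ]; then echo on; fi         # -> "[: : integer expression expected"
  if [ "${FLAG:-0}" -eq 1 ]; then echo on; fi    # defaulted expansion, no error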
00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:14.037 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.941 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:15.942 18:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:15.942 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:15.942 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:15.942 
18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:15.942 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:15.942 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:15.942 18:14:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:12:15.942 00:12:15.942 --- 10.0.0.2 ping statistics --- 00:12:15.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.942 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:12:15.942 00:12:15.942 --- 10.0.0.1 ping statistics --- 00:12:15.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.942 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:15.942 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.943 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.943 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.943 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.943 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.943 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.943 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2411740 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2411740 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2411740 ']' 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
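nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. Outside the harness, that wait can be approximated by polling a cheap RPC; the following is a sketch under the assumption that scripts/rpc.py and the default socket path are used (the nvmf_tgt flags -i/-e/-m are copied from the command traced above).

  # Sketch: start the target in the namespace, then poll until its RPC socket answers
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk               # path taken from the log
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  pid=$!
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 $pid || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done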
00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.943 18:14:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.202 [2024-11-28 18:14:28.117458] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:16.202 [2024-11-28 18:14:28.117627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.202 [2024-11-28 18:14:28.272169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.460 [2024-11-28 18:14:28.418934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.460 [2024-11-28 18:14:28.419004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.460 [2024-11-28 18:14:28.419030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.461 [2024-11-28 18:14:28.419055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.461 [2024-11-28 18:14:28.419075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.461 [2024-11-28 18:14:28.421872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.461 [2024-11-28 18:14:28.422035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.461 [2024-11-28 18:14:28.422082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.461 [2024-11-28 18:14:28.422087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.029 [2024-11-28 18:14:29.090393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:17.029 [2024-11-28 18:14:29.114758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.029 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:17.289 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:17.290 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:17.290 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.290 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.290 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.290 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.290 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:17.550 18:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.550 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
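(Annotation: the trace above exercises the discovery-referral RPCs through the rpc_cmd helper. Below is a minimal stand-alone sketch of the same sequence, under the assumption that rpc_cmd forwards its arguments to scripts/rpc.py against the default RPC socket; the relative script path is illustrative.)

  # add three referrals pointing at other discovery services (referrals.sh@44-46)
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430

  # list them back; the test asserts the array length is 3 and compares sorted traddr values
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq length
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # remove one entry again (referrals.sh@52); -n selects a specific subsystem NQN when needed
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430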
\r\p\c ]] 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.811 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.070 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:18.070 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:18.070 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:18.070 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:18.070 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:18.070 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.070 18:14:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:18.070 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:18.070 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:18.070 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:18.070 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:18.070 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.070 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.330 18:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.330 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.591 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:18.591 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:18.591 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:18.591 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:18.591 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:18.591 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.591 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.851 18:14:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
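(Annotation: the host-side checks above rely on nvme-cli's JSON discovery output. The two jq queries used by get_referral_ips nvme and get_discovery_entries in the trace are reproduced below; the host NQN/ID pair is the one generated earlier in this run.)

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

  # traddr of every referral record returned by the discovery service on 10.0.0.2:8009
  nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # full record filtered by subtype, e.g. the referral added with -n nqn.2016-06.io.spdk:cnode1
  nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq '.records[] | select(.subtype == "nvme subsystem")'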
00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:19.110 rmmod nvme_tcp 00:12:19.110 rmmod nvme_fabrics 00:12:19.110 rmmod nvme_keyring 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2411740 ']' 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2411740 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2411740 ']' 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2411740 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.110 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2411740 00:12:19.369 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.369 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.369 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2411740' 00:12:19.369 killing process with pid 2411740 00:12:19.369 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2411740 00:12:19.369 18:14:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2411740 00:12:20.307 18:14:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:20.307 18:14:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:20.307 18:14:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:20.307 18:14:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:20.307 18:14:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:20.307 18:14:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:20.307 18:14:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:20.307 18:14:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.307 18:14:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.307 18:14:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.307 18:14:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.307 18:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:22.844 00:12:22.844 real 0m8.785s 00:12:22.844 user 0m16.331s 00:12:22.844 sys 0m2.517s 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.844 ************************************ 00:12:22.844 END TEST nvmf_referrals 00:12:22.844 ************************************ 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:22.844 ************************************ 00:12:22.844 START TEST nvmf_connect_disconnect 00:12:22.844 ************************************ 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:22.844 * Looking for test storage... 00:12:22.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.844 18:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:22.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.844 --rc genhtml_branch_coverage=1 00:12:22.844 --rc genhtml_function_coverage=1 00:12:22.844 --rc genhtml_legend=1 00:12:22.844 --rc geninfo_all_blocks=1 00:12:22.844 --rc geninfo_unexecuted_blocks=1 00:12:22.844 00:12:22.844 ' 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:22.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.844 --rc genhtml_branch_coverage=1 00:12:22.844 --rc genhtml_function_coverage=1 00:12:22.844 --rc genhtml_legend=1 00:12:22.844 --rc geninfo_all_blocks=1 00:12:22.844 --rc geninfo_unexecuted_blocks=1 00:12:22.844 00:12:22.844 ' 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:22.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.844 --rc genhtml_branch_coverage=1 00:12:22.844 --rc genhtml_function_coverage=1 00:12:22.844 --rc genhtml_legend=1 00:12:22.844 --rc geninfo_all_blocks=1 00:12:22.844 --rc geninfo_unexecuted_blocks=1 00:12:22.844 00:12:22.844 ' 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:22.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.844 --rc genhtml_branch_coverage=1 00:12:22.844 --rc genhtml_function_coverage=1 00:12:22.844 --rc genhtml_legend=1 00:12:22.844 --rc geninfo_all_blocks=1 00:12:22.844 --rc geninfo_unexecuted_blocks=1 00:12:22.844 00:12:22.844 ' 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.844 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.845 18:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:22.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:22.845 18:14:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:24.747 
18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:24.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.747 
18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:24.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:24.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:24.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.747 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:12:25.006 00:12:25.006 --- 10.0.0.2 ping statistics --- 00:12:25.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.006 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:12:25.006 00:12:25.006 --- 10.0.0.1 ping statistics --- 00:12:25.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.006 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2414300 00:12:25.006 18:14:36 
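(Annotation: the nvmf_tcp_init steps above split the two e810 ports between a network namespace and the root namespace: cvl_0_0 becomes the target interface at 10.0.0.2 inside cvl_0_0_ns_spdk, while cvl_0_1 stays on the host as the initiator at 10.0.0.1. Condensed from the commands traced above; the address-flush and cleanup steps are omitted.)

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow the NVMe/TCP port through and sanity-check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1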
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2414300 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2414300 ']' 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.006 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.006 [2024-11-28 18:14:37.050477] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:25.006 [2024-11-28 18:14:37.050630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.264 [2024-11-28 18:14:37.209809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.264 [2024-11-28 18:14:37.356301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.264 [2024-11-28 18:14:37.356384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.264 [2024-11-28 18:14:37.356411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.264 [2024-11-28 18:14:37.356435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.264 [2024-11-28 18:14:37.356455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:25.264 [2024-11-28 18:14:37.359206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.264 [2024-11-28 18:14:37.359256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.264 [2024-11-28 18:14:37.359318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.264 [2024-11-28 18:14:37.359323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.199 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.199 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:26.199 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:26.199 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.199 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.199 [2024-11-28 18:14:38.031221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.199 18:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.199 [2024-11-28 18:14:38.160581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:26.199 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:28.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.482 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [the identical disconnect notification repeats once per remaining iteration, timestamps 00:13:41 through 00:15:28] 00:15:31.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1
controller(s) 00:15:33.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.454 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:20.454 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:20.454 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:20.454 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:20.454 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:20.454 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:20.455 rmmod nvme_tcp 00:16:20.455 rmmod nvme_fabrics 00:16:20.455 rmmod nvme_keyring 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2414300 ']' 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2414300 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2414300 ']' 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2414300 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
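The hundred "disconnected 1 controller(s)" notifications above are the visible half of the connect_disconnect loop: each iteration connects the kernel initiator to the subsystem provisioned earlier in the trace and then tears the session down. Reconstructed from the rpc_cmd and NVME_CONNECT calls shown in the trace (paths and addresses as used on this test node, so treat it as an illustrative sketch rather than the test script itself), the flow is approximately:

  RPC=./scripts/rpc.py   # talks to the nvmf_tgt started earlier over /var/tmp/spdk.sock

  # One-time provisioning, as issued by connect_disconnect.sh
  sudo $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
  sudo $RPC bdev_malloc_create 64 512                        # 64 MiB, 512-byte blocks -> Malloc0
  sudo $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  sudo $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  sudo $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # One of the 100 iterations: connect with 8 I/O queues, then disconnect
  sudo nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"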
00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2414300 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2414300' 00:16:20.455 killing process with pid 2414300 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2414300 00:16:20.455 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2414300 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.833 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.372 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:24.372 00:16:24.372 real 4m1.437s 00:16:24.372 user 15m11.454s 00:16:24.372 sys 0m40.200s 00:16:24.372 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.372 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:24.372 ************************************ 00:16:24.372 END TEST nvmf_connect_disconnect 00:16:24.372 ************************************ 00:16:24.372 18:18:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:24.372 18:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:24.372 18:18:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.372 18:18:35 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:24.372 ************************************ 00:16:24.372 START TEST nvmf_multitarget 00:16:24.372 ************************************ 00:16:24.372 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:24.372 * Looking for test storage... 00:16:24.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:24.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.372 --rc genhtml_branch_coverage=1 00:16:24.372 --rc genhtml_function_coverage=1 00:16:24.372 --rc genhtml_legend=1 00:16:24.372 --rc geninfo_all_blocks=1 00:16:24.372 --rc geninfo_unexecuted_blocks=1 00:16:24.372 00:16:24.372 ' 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:24.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.372 --rc genhtml_branch_coverage=1 00:16:24.372 --rc genhtml_function_coverage=1 00:16:24.372 --rc genhtml_legend=1 00:16:24.372 --rc geninfo_all_blocks=1 00:16:24.372 --rc geninfo_unexecuted_blocks=1 00:16:24.372 00:16:24.372 ' 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:24.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.372 --rc genhtml_branch_coverage=1 00:16:24.372 --rc genhtml_function_coverage=1 00:16:24.372 --rc genhtml_legend=1 00:16:24.372 --rc geninfo_all_blocks=1 00:16:24.372 --rc geninfo_unexecuted_blocks=1 00:16:24.372 00:16:24.372 ' 00:16:24.372 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:24.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.372 --rc genhtml_branch_coverage=1 00:16:24.372 --rc genhtml_function_coverage=1 00:16:24.372 --rc genhtml_legend=1 00:16:24.372 --rc geninfo_all_blocks=1 00:16:24.372 --rc geninfo_unexecuted_blocks=1 00:16:24.373 00:16:24.373 ' 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.373 18:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:24.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:24.373 18:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:24.373 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:26.275 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:26.275 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:26.275 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:26.275 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:26.275 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:26.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:16:26.276 00:16:26.276 --- 10.0.0.2 ping statistics --- 00:16:26.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.276 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:26.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:26.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:16:26.276 00:16:26.276 --- 10.0.0.1 ping statistics --- 00:16:26.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.276 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2446529 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2446529 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2446529 ']' 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.276 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:26.533 [2024-11-28 18:18:38.450033] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:26.533 [2024-11-28 18:18:38.450204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.533 [2024-11-28 18:18:38.611825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.790 [2024-11-28 18:18:38.755711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.790 [2024-11-28 18:18:38.755772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.790 [2024-11-28 18:18:38.755793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.790 [2024-11-28 18:18:38.755813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.790 [2024-11-28 18:18:38.755830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.790 [2024-11-28 18:18:38.758488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.790 [2024-11-28 18:18:38.758534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.790 [2024-11-28 18:18:38.758596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.790 [2024-11-28 18:18:38.758601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.356 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.356 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:27.356 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:27.356 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:27.356 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:27.356 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.356 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:27.356 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:27.356 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:27.612 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:27.612 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:27.612 "nvmf_tgt_1" 00:16:27.612 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:27.870 "nvmf_tgt_2" 00:16:27.870 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:27.870 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:27.870 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:27.870 18:18:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:28.127 true 00:16:28.127 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:28.127 true 00:16:28.127 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:28.127 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:28.384 rmmod nvme_tcp 00:16:28.384 rmmod nvme_fabrics 00:16:28.384 rmmod nvme_keyring 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2446529 ']' 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2446529 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2446529 ']' 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2446529 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2446529 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.384 18:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2446529' 00:16:28.384 killing process with pid 2446529 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2446529 00:16:28.384 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2446529 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.761 18:18:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:31.668 00:16:31.668 real 0m7.584s 00:16:31.668 user 0m12.178s 00:16:31.668 sys 0m2.199s 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:31.668 ************************************ 00:16:31.668 END TEST nvmf_multitarget 00:16:31.668 ************************************ 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:31.668 ************************************ 00:16:31.668 START TEST nvmf_rpc 00:16:31.668 ************************************ 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:31.668 * Looking for test storage... 
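The nvmf_multitarget run that just finished drives target creation and deletion entirely through the test's JSON-RPC helper, with jq checks on the length of the returned target list. Reconstructed from the calls and outputs shown in the trace (script path as laid out in this workspace, so consider it a sketch of the sequence rather than the test itself), the flow is roughly:

  RPC_PY=./test/nvmf/target/multitarget_rpc.py   # JSON-RPC helper used by the test

  sudo $RPC_PY nvmf_get_targets | jq length             # 1: only the default target exists
  sudo $RPC_PY nvmf_create_target -n nvmf_tgt_1 -s 32    # prints "nvmf_tgt_1"
  sudo $RPC_PY nvmf_create_target -n nvmf_tgt_2 -s 32    # prints "nvmf_tgt_2"
  sudo $RPC_PY nvmf_get_targets | jq length             # 3: default plus the two new targets
  sudo $RPC_PY nvmf_delete_target -n nvmf_tgt_1          # prints "true"
  sudo $RPC_PY nvmf_delete_target -n nvmf_tgt_2          # prints "true"
  sudo $RPC_PY nvmf_get_targets | jq length             # back to 1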
00:16:31.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:31.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.668 --rc genhtml_branch_coverage=1 00:16:31.668 --rc genhtml_function_coverage=1 00:16:31.668 --rc genhtml_legend=1 00:16:31.668 --rc geninfo_all_blocks=1 00:16:31.668 --rc geninfo_unexecuted_blocks=1 00:16:31.668 00:16:31.668 ' 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:31.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.668 --rc genhtml_branch_coverage=1 00:16:31.668 --rc genhtml_function_coverage=1 00:16:31.668 --rc genhtml_legend=1 00:16:31.668 --rc geninfo_all_blocks=1 00:16:31.668 --rc geninfo_unexecuted_blocks=1 00:16:31.668 00:16:31.668 ' 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:31.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.668 --rc genhtml_branch_coverage=1 00:16:31.668 --rc genhtml_function_coverage=1 00:16:31.668 --rc genhtml_legend=1 00:16:31.668 --rc geninfo_all_blocks=1 00:16:31.668 --rc geninfo_unexecuted_blocks=1 00:16:31.668 00:16:31.668 ' 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:31.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.668 --rc genhtml_branch_coverage=1 00:16:31.668 --rc genhtml_function_coverage=1 00:16:31.668 --rc genhtml_legend=1 00:16:31.668 --rc geninfo_all_blocks=1 00:16:31.668 --rc geninfo_unexecuted_blocks=1 00:16:31.668 00:16:31.668 ' 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.668 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:31.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:31.669 18:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:31.669 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.576 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:33.577 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:33.577 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:33.577 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:33.577 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:33.577 18:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.577 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:33.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:16:33.836 00:16:33.836 --- 10.0.0.2 ping statistics --- 00:16:33.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.836 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:16:33.836 00:16:33.836 --- 10.0.0.1 ping statistics --- 00:16:33.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.836 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:33.836 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2448890 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2448890 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2448890 ']' 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.837 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.837 [2024-11-28 18:18:45.897460] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:33.837 [2024-11-28 18:18:45.897609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.097 [2024-11-28 18:18:46.053655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.357 [2024-11-28 18:18:46.199869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.357 [2024-11-28 18:18:46.199952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.357 [2024-11-28 18:18:46.199978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.358 [2024-11-28 18:18:46.200002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.358 [2024-11-28 18:18:46.200021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.358 [2024-11-28 18:18:46.203119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.358 [2024-11-28 18:18:46.203172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.358 [2024-11-28 18:18:46.203224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.358 [2024-11-28 18:18:46.203232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.925 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.925 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:34.925 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.925 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:34.925 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.925 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.925 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:34.925 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.925 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.925 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.925 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:34.925 "tick_rate": 2700000000, 00:16:34.925 "poll_groups": [ 00:16:34.925 { 00:16:34.925 "name": "nvmf_tgt_poll_group_000", 00:16:34.925 "admin_qpairs": 0, 00:16:34.925 "io_qpairs": 0, 00:16:34.925 "current_admin_qpairs": 0, 00:16:34.925 "current_io_qpairs": 0, 00:16:34.925 "pending_bdev_io": 0, 00:16:34.925 "completed_nvme_io": 0, 00:16:34.925 "transports": [] 00:16:34.925 }, 00:16:34.925 { 00:16:34.925 "name": "nvmf_tgt_poll_group_001", 00:16:34.925 "admin_qpairs": 0, 00:16:34.925 "io_qpairs": 0, 00:16:34.925 "current_admin_qpairs": 0, 00:16:34.925 "current_io_qpairs": 0, 00:16:34.925 "pending_bdev_io": 0, 00:16:34.925 "completed_nvme_io": 0, 00:16:34.925 "transports": [] 00:16:34.925 }, 00:16:34.925 { 00:16:34.925 "name": "nvmf_tgt_poll_group_002", 00:16:34.925 "admin_qpairs": 0, 00:16:34.925 "io_qpairs": 0, 00:16:34.925 
"current_admin_qpairs": 0, 00:16:34.926 "current_io_qpairs": 0, 00:16:34.926 "pending_bdev_io": 0, 00:16:34.926 "completed_nvme_io": 0, 00:16:34.926 "transports": [] 00:16:34.926 }, 00:16:34.926 { 00:16:34.926 "name": "nvmf_tgt_poll_group_003", 00:16:34.926 "admin_qpairs": 0, 00:16:34.926 "io_qpairs": 0, 00:16:34.926 "current_admin_qpairs": 0, 00:16:34.926 "current_io_qpairs": 0, 00:16:34.926 "pending_bdev_io": 0, 00:16:34.926 "completed_nvme_io": 0, 00:16:34.926 "transports": [] 00:16:34.926 } 00:16:34.926 ] 00:16:34.926 }' 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.926 [2024-11-28 18:18:46.972249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.926 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.926 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.926 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:34.926 "tick_rate": 2700000000, 00:16:34.926 "poll_groups": [ 00:16:34.926 { 00:16:34.926 "name": "nvmf_tgt_poll_group_000", 00:16:34.926 "admin_qpairs": 0, 00:16:34.926 "io_qpairs": 0, 00:16:34.926 "current_admin_qpairs": 0, 00:16:34.926 "current_io_qpairs": 0, 00:16:34.926 "pending_bdev_io": 0, 00:16:34.926 "completed_nvme_io": 0, 00:16:34.926 "transports": [ 00:16:34.926 { 00:16:34.926 "trtype": "TCP" 00:16:34.926 } 00:16:34.926 ] 00:16:34.926 }, 00:16:34.926 { 00:16:34.926 "name": "nvmf_tgt_poll_group_001", 00:16:34.926 "admin_qpairs": 0, 00:16:34.926 "io_qpairs": 0, 00:16:34.926 "current_admin_qpairs": 0, 00:16:34.926 "current_io_qpairs": 0, 00:16:34.926 "pending_bdev_io": 0, 00:16:34.926 "completed_nvme_io": 0, 00:16:34.926 "transports": [ 00:16:34.926 { 00:16:34.926 "trtype": "TCP" 00:16:34.926 } 00:16:34.926 ] 00:16:34.926 }, 00:16:34.926 { 00:16:34.926 "name": "nvmf_tgt_poll_group_002", 00:16:34.926 "admin_qpairs": 0, 00:16:34.926 "io_qpairs": 0, 00:16:34.926 "current_admin_qpairs": 0, 00:16:34.926 "current_io_qpairs": 0, 00:16:34.926 "pending_bdev_io": 0, 00:16:34.926 "completed_nvme_io": 0, 00:16:34.926 "transports": [ 00:16:34.926 { 00:16:34.926 "trtype": "TCP" 
00:16:34.926 } 00:16:34.926 ] 00:16:34.926 }, 00:16:34.926 { 00:16:34.926 "name": "nvmf_tgt_poll_group_003", 00:16:34.926 "admin_qpairs": 0, 00:16:34.926 "io_qpairs": 0, 00:16:34.926 "current_admin_qpairs": 0, 00:16:34.926 "current_io_qpairs": 0, 00:16:34.926 "pending_bdev_io": 0, 00:16:34.926 "completed_nvme_io": 0, 00:16:34.926 "transports": [ 00:16:34.926 { 00:16:34.926 "trtype": "TCP" 00:16:34.926 } 00:16:34.926 ] 00:16:34.926 } 00:16:34.926 ] 00:16:34.926 }' 00:16:34.926 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:34.926 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:34.926 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:34.926 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.187 Malloc1 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.187 [2024-11-28 18:18:47.176011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:35.187 [2024-11-28 18:18:47.199334] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:35.187 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:35.187 could not add new controller: failed to write to nvme-fabrics device 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:35.187 18:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.187 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.188 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.188 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.188 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.188 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.188 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.188 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:36.128 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:36.128 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:36.128 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.128 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:36.128 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:38.034 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:38.034 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:38.034 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:38.034 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:38.034 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.034 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:38.034 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.034 [2024-11-28 18:18:50.105408] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:38.034 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:38.034 could not add new controller: failed to write to nvme-fabrics device 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.034 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.294 
18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.294 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.865 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.865 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:38.865 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.865 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:38.865 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:40.799 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:40.799 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:40.799 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.799 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:40.799 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.799 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:40.799 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:41.057 
18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.057 [2024-11-28 18:18:52.974324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.057 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:41.058 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.058 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.058 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.058 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:41.058 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.058 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.058 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.058 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.624 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:41.624 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:41.624 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.624 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:41.624 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:43.531 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:43.531 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:43.531 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.531 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:43.531 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.531 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:43.531 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.792 [2024-11-28 18:18:55.824071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.792 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:44.729 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:44.729 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:44.729 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:44.729 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:44.729 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 [2024-11-28 18:18:58.719570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.634 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.895 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.895 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.463 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:47.463 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:47.463 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.463 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:47.463 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:49.367 
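
[Note] The trace here is stepping through the suite's waitforserial helper: sleep, count block devices whose SERIAL matches the subsystem serial with lsblk, and return once the count reaches the expected number of devices. A minimal sketch of that polling loop, reconstructed only from the commands visible in the trace — the authoritative definition lives in common/autotest_common.sh and its exact argument handling and loop ordering may differ:

    # Hedged reconstruction of the waitforserial polling loop seen in the trace.
    # Helper names and the 15-attempt / 2-second cadence are taken from the
    # traced commands; anything else is inferred.
    waitforserial() {
        local serial=$1
        local i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while ((i++ <= 15)); do
            sleep 2
            # grep -c prints 0 and exits non-zero when nothing matches; tolerate that
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1   # device with the expected serial never appeared
    }

In this run the count reaches 1 on the first check, so the helper returns 0 and the test proceeds to nvme disconnect.
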
18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:49.367 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:49.367 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.367 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:49.367 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.367 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:49.367 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.626 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.626 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:49.626 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:49.626 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.626 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.627 [2024-11-28 18:19:01.652923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.627 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.566 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:50.566 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:50.566 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:50.566 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:50.566 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:52.470 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:52.470 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:52.470 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:52.470 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:52.470 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:52.470 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:52.470 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:52.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.734 [2024-11-28 18:19:04.629066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.734 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:53.412 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:53.412 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:53.412 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.412 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:53.412 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:55.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.317 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:55.578 
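
[Note] Each iteration traced above follows the same provision/connect/teardown pattern from target/rpc.sh (@81-@94). A condensed bash sketch of that flow as it appears in the trace — rpc_cmd, waitforserial, waitforserial_disconnect and the NVME_HOST array are the suite's own helpers and variables, and the address, port and serial are the ones printed in the log, so treat this as an illustration rather than the script itself:

    # Condensed sketch of the traced loop (target/rpc.sh @81-@94): create a TCP
    # subsystem, attach a namespace, connect with the kernel initiator, then
    # disconnect and delete everything. Helper functions come from the test suite.
    loops=5
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host "$nqn"

        nvme connect "${NVME_HOST[@]}" -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME             # wait for the namespace to appear
        nvme disconnect -n "$nqn"
        waitforserial_disconnect SPDKISFASTANDAWESOME  # wait for it to go away again

        rpc_cmd nvmf_subsystem_remove_ns "$nqn" 5
        rpc_cmd nvmf_delete_subsystem "$nqn"
    done

The second loop that begins at this point in the trace (target/rpc.sh @99-@107) exercises the same RPCs without the host side: create, add listener, add the Malloc1 namespace, allow any host, then immediately remove namespace 1 and delete the subsystem, five more times.
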
18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 [2024-11-28 18:19:07.442551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 [2024-11-28 18:19:07.490641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 
18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 [2024-11-28 18:19:07.538822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.578 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 [2024-11-28 18:19:07.586984] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 [2024-11-28 18:19:07.635152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.579 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.839 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.839 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:55.839 "tick_rate": 2700000000, 00:16:55.839 "poll_groups": [ 00:16:55.840 { 00:16:55.840 "name": "nvmf_tgt_poll_group_000", 00:16:55.840 "admin_qpairs": 2, 00:16:55.840 "io_qpairs": 84, 00:16:55.840 "current_admin_qpairs": 0, 00:16:55.840 "current_io_qpairs": 0, 00:16:55.840 "pending_bdev_io": 0, 00:16:55.840 "completed_nvme_io": 137, 00:16:55.840 "transports": [ 00:16:55.840 { 00:16:55.840 "trtype": "TCP" 00:16:55.840 } 00:16:55.840 ] 00:16:55.840 }, 00:16:55.840 { 00:16:55.840 "name": "nvmf_tgt_poll_group_001", 00:16:55.840 "admin_qpairs": 2, 00:16:55.840 "io_qpairs": 84, 00:16:55.840 "current_admin_qpairs": 0, 00:16:55.840 "current_io_qpairs": 0, 00:16:55.840 "pending_bdev_io": 0, 00:16:55.840 "completed_nvme_io": 140, 00:16:55.840 "transports": [ 00:16:55.840 { 00:16:55.840 "trtype": "TCP" 00:16:55.840 } 00:16:55.840 ] 00:16:55.840 }, 00:16:55.840 { 00:16:55.840 "name": "nvmf_tgt_poll_group_002", 00:16:55.840 "admin_qpairs": 1, 00:16:55.840 "io_qpairs": 84, 00:16:55.840 "current_admin_qpairs": 0, 00:16:55.840 "current_io_qpairs": 0, 00:16:55.840 "pending_bdev_io": 0, 00:16:55.840 "completed_nvme_io": 227, 00:16:55.840 "transports": [ 00:16:55.840 { 00:16:55.840 "trtype": "TCP" 00:16:55.840 } 00:16:55.840 ] 00:16:55.840 }, 00:16:55.840 { 00:16:55.840 "name": "nvmf_tgt_poll_group_003", 00:16:55.840 "admin_qpairs": 2, 00:16:55.840 "io_qpairs": 84, 00:16:55.840 "current_admin_qpairs": 0, 00:16:55.840 "current_io_qpairs": 0, 00:16:55.840 "pending_bdev_io": 0, 00:16:55.840 "completed_nvme_io": 182, 00:16:55.840 "transports": [ 00:16:55.840 { 00:16:55.840 "trtype": "TCP" 00:16:55.840 } 00:16:55.840 ] 00:16:55.840 } 00:16:55.840 ] 00:16:55.840 }' 00:16:55.840 18:19:07 
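
[Note] The nvmf_get_stats output captured above is then reduced with the suite's jsum helper: a jq filter pulls one numeric field out of every poll group, awk sums the values, and the test only asserts that each total is positive. A short reconstruction of that aggregation based on the jq/awk pipeline visible in the trace (the helper's exact definition lives in target/rpc.sh):

    # Reconstruction of the jsum aggregation traced just below (target/rpc.sh
    # @19-@20, @112-@113): sum one numeric field across all poll groups in the
    # nvmf_get_stats JSON held in $stats.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }

    stats=$(rpc_cmd nvmf_get_stats)                    # JSON shown above
    (($(jsum '.poll_groups[].admin_qpairs') > 0))      # 2+2+1+2 = 7 in this run
    (($(jsum '.poll_groups[].io_qpairs') > 0))         # 4 x 84 = 336 in this run
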
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.840 rmmod nvme_tcp 00:16:55.840 rmmod nvme_fabrics 00:16:55.840 rmmod nvme_keyring 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2448890 ']' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2448890 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2448890 ']' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2448890 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2448890 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2448890' 00:16:55.840 killing process with pid 2448890 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2448890 00:16:55.840 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2448890 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.220 18:19:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.131 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:59.131 00:16:59.131 real 0m27.618s 00:16:59.131 user 1m29.872s 00:16:59.131 sys 0m4.471s 00:16:59.131 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.131 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.131 ************************************ 00:16:59.131 END TEST nvmf_rpc 00:16:59.131 ************************************ 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.390 ************************************ 00:16:59.390 START TEST nvmf_invalid 00:16:59.390 ************************************ 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:59.390 * Looking for test storage... 
00:16:59.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.390 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:59.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.391 --rc genhtml_branch_coverage=1 00:16:59.391 --rc genhtml_function_coverage=1 00:16:59.391 --rc genhtml_legend=1 00:16:59.391 --rc geninfo_all_blocks=1 00:16:59.391 --rc geninfo_unexecuted_blocks=1 00:16:59.391 00:16:59.391 ' 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:59.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.391 --rc genhtml_branch_coverage=1 00:16:59.391 --rc genhtml_function_coverage=1 00:16:59.391 --rc genhtml_legend=1 00:16:59.391 --rc geninfo_all_blocks=1 00:16:59.391 --rc geninfo_unexecuted_blocks=1 00:16:59.391 00:16:59.391 ' 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:59.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.391 --rc genhtml_branch_coverage=1 00:16:59.391 --rc genhtml_function_coverage=1 00:16:59.391 --rc genhtml_legend=1 00:16:59.391 --rc geninfo_all_blocks=1 00:16:59.391 --rc geninfo_unexecuted_blocks=1 00:16:59.391 00:16:59.391 ' 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:59.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.391 --rc genhtml_branch_coverage=1 00:16:59.391 --rc genhtml_function_coverage=1 00:16:59.391 --rc genhtml_legend=1 00:16:59.391 --rc geninfo_all_blocks=1 00:16:59.391 --rc geninfo_unexecuted_blocks=1 00:16:59.391 00:16:59.391 ' 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:59.391 18:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:59.391 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:01.929 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.929 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:01.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:01.930 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:01.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:01.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:17:01.930 00:17:01.930 --- 10.0.0.2 ping statistics --- 00:17:01.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.930 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:17:01.930 00:17:01.930 --- 10.0.0.1 ping statistics --- 00:17:01.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.930 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2453659 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2453659 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2453659 ']' 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:01.930 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:01.930 [2024-11-28 18:19:13.746620] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
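At this point in the trace, nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0xF and is waiting for it to come up and listen on /var/tmp/spdk.sock. A minimal bash sketch of that start-and-wait pattern follows; it is an illustration only, not the repository's actual waitforlisten helper, and the binary path, namespace name, and retry budget are assumptions based on this run.

# Sketch only: start nvmf_tgt in the target namespace and poll its RPC socket.
# Paths, namespace name, and retry budget are illustrative assumptions.
sock=/var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
pid=$!
for _ in $(seq 1 100); do
    # rpc_get_methods succeeds only once the app is up and listening on the socket
    if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    # bail out early if the target died during startup
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited prematurely" >&2; exit 1; }
    sleep 0.1
done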
00:17:01.930 [2024-11-28 18:19:13.746773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.930 [2024-11-28 18:19:13.907401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.188 [2024-11-28 18:19:14.052156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.188 [2024-11-28 18:19:14.052248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.188 [2024-11-28 18:19:14.052274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.188 [2024-11-28 18:19:14.052298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.188 [2024-11-28 18:19:14.052318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.188 [2024-11-28 18:19:14.055106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.188 [2024-11-28 18:19:14.055154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.188 [2024-11-28 18:19:14.055199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.188 [2024-11-28 18:19:14.055192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.753 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.753 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:02.753 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:02.753 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.753 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.753 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.753 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:02.753 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5605 00:17:03.011 [2024-11-28 18:19:14.974055] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:03.011 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:03.011 { 00:17:03.011 "nqn": "nqn.2016-06.io.spdk:cnode5605", 00:17:03.011 "tgt_name": "foobar", 00:17:03.011 "method": "nvmf_create_subsystem", 00:17:03.011 "req_id": 1 00:17:03.011 } 00:17:03.011 Got JSON-RPC error response 00:17:03.011 response: 00:17:03.011 { 00:17:03.011 "code": -32603, 00:17:03.011 "message": "Unable to find target foobar" 00:17:03.011 }' 00:17:03.011 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:03.011 { 00:17:03.011 "nqn": "nqn.2016-06.io.spdk:cnode5605", 00:17:03.011 "tgt_name": "foobar", 00:17:03.011 "method": "nvmf_create_subsystem", 00:17:03.011 "req_id": 1 00:17:03.011 } 00:17:03.011 Got JSON-RPC error response 00:17:03.011 
response: 00:17:03.011 { 00:17:03.011 "code": -32603, 00:17:03.011 "message": "Unable to find target foobar" 00:17:03.011 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:03.011 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:03.011 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15029 00:17:03.270 [2024-11-28 18:19:15.259121] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15029: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:03.270 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:03.270 { 00:17:03.270 "nqn": "nqn.2016-06.io.spdk:cnode15029", 00:17:03.270 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:03.270 "method": "nvmf_create_subsystem", 00:17:03.270 "req_id": 1 00:17:03.270 } 00:17:03.270 Got JSON-RPC error response 00:17:03.270 response: 00:17:03.270 { 00:17:03.270 "code": -32602, 00:17:03.270 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:03.270 }' 00:17:03.270 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:03.270 { 00:17:03.270 "nqn": "nqn.2016-06.io.spdk:cnode15029", 00:17:03.270 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:03.270 "method": "nvmf_create_subsystem", 00:17:03.270 "req_id": 1 00:17:03.270 } 00:17:03.270 Got JSON-RPC error response 00:17:03.270 response: 00:17:03.270 { 00:17:03.270 "code": -32602, 00:17:03.270 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:03.270 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:03.270 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:03.270 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10281 00:17:03.528 [2024-11-28 18:19:15.548188] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10281: invalid model number 'SPDK_Controller' 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:03.528 { 00:17:03.528 "nqn": "nqn.2016-06.io.spdk:cnode10281", 00:17:03.528 "model_number": "SPDK_Controller\u001f", 00:17:03.528 "method": "nvmf_create_subsystem", 00:17:03.528 "req_id": 1 00:17:03.528 } 00:17:03.528 Got JSON-RPC error response 00:17:03.528 response: 00:17:03.528 { 00:17:03.528 "code": -32602, 00:17:03.528 "message": "Invalid MN SPDK_Controller\u001f" 00:17:03.528 }' 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:03.528 { 00:17:03.528 "nqn": "nqn.2016-06.io.spdk:cnode10281", 00:17:03.528 "model_number": "SPDK_Controller\u001f", 00:17:03.528 "method": "nvmf_create_subsystem", 00:17:03.528 "req_id": 1 00:17:03.528 } 00:17:03.528 Got JSON-RPC error response 00:17:03.528 response: 00:17:03.528 { 00:17:03.528 "code": -32602, 00:17:03.528 "message": "Invalid MN SPDK_Controller\u001f" 00:17:03.528 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:03.528 18:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:03.528 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.529 18:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:03.529 
18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.529 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 
00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ( == \- ]] 00:17:03.786 18:19:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '(\TP}(*koBquCFo#SigKy' 00:17:04.049 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Dcy#=A]| uCFo#SigKy' nqn.2016-06.io.spdk:cnode3771 00:17:04.307 [2024-11-28 18:19:16.374951] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3771: invalid model number 'Dcy#=A]| uCFo#SigKy' 00:17:04.307 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:04.307 { 00:17:04.307 "nqn": "nqn.2016-06.io.spdk:cnode3771", 00:17:04.307 "model_number": "Dcy#=A]| uCFo#SigKy", 00:17:04.307 "method": "nvmf_create_subsystem", 00:17:04.307 "req_id": 1 00:17:04.307 } 00:17:04.307 Got JSON-RPC error response 00:17:04.307 response: 00:17:04.307 { 00:17:04.307 "code": -32602, 00:17:04.307 "message": "Invalid MN Dcy#=A]| uCFo#SigKy" 00:17:04.307 }' 00:17:04.307 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:04.307 { 00:17:04.307 "nqn": "nqn.2016-06.io.spdk:cnode3771", 00:17:04.307 "model_number": "Dcy#=A]| uCFo#SigKy", 00:17:04.307 "method": "nvmf_create_subsystem", 00:17:04.307 "req_id": 1 00:17:04.307 } 00:17:04.307 Got JSON-RPC error response 00:17:04.307 response: 00:17:04.307 { 00:17:04.307 "code": -32602, 00:17:04.307 "message": "Invalid MN Dcy#=A]| uCFo#SigKy" 00:17:04.307 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:04.307 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:04.565 [2024-11-28 18:19:16.664044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.823 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:05.081 18:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:05.081 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:05.081 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:05.081 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:05.081 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:05.339 [2024-11-28 18:19:17.219619] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:05.339 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:05.339 { 00:17:05.339 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:05.339 "listen_address": { 00:17:05.339 "trtype": "tcp", 00:17:05.339 "traddr": "", 00:17:05.339 "trsvcid": "4421" 00:17:05.339 }, 00:17:05.339 "method": "nvmf_subsystem_remove_listener", 00:17:05.339 "req_id": 1 00:17:05.339 } 00:17:05.339 Got JSON-RPC error response 00:17:05.339 response: 00:17:05.339 { 00:17:05.339 "code": -32602, 00:17:05.339 "message": "Invalid parameters" 00:17:05.339 }' 00:17:05.339 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:05.339 { 00:17:05.339 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:05.339 "listen_address": { 00:17:05.339 "trtype": "tcp", 00:17:05.339 "traddr": "", 00:17:05.339 "trsvcid": "4421" 00:17:05.339 }, 00:17:05.339 "method": "nvmf_subsystem_remove_listener", 00:17:05.339 "req_id": 1 00:17:05.339 } 00:17:05.339 Got JSON-RPC error response 00:17:05.339 response: 00:17:05.339 { 00:17:05.339 "code": -32602, 00:17:05.339 "message": "Invalid parameters" 00:17:05.339 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:05.339 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4885 -i 0 00:17:05.597 [2024-11-28 18:19:17.504489] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4885: invalid cntlid range [0-65519] 00:17:05.597 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:05.597 { 00:17:05.597 "nqn": "nqn.2016-06.io.spdk:cnode4885", 00:17:05.597 "min_cntlid": 0, 00:17:05.597 "method": "nvmf_create_subsystem", 00:17:05.597 "req_id": 1 00:17:05.597 } 00:17:05.597 Got JSON-RPC error response 00:17:05.597 response: 00:17:05.597 { 00:17:05.597 "code": -32602, 00:17:05.597 "message": "Invalid cntlid range [0-65519]" 00:17:05.597 }' 00:17:05.597 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:05.597 { 00:17:05.597 "nqn": "nqn.2016-06.io.spdk:cnode4885", 00:17:05.597 "min_cntlid": 0, 00:17:05.597 "method": "nvmf_create_subsystem", 00:17:05.597 "req_id": 1 00:17:05.597 } 00:17:05.597 Got JSON-RPC error response 00:17:05.597 response: 00:17:05.597 { 00:17:05.597 "code": -32602, 00:17:05.597 "message": "Invalid cntlid range [0-65519]" 00:17:05.597 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.597 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5708 -i 65520 00:17:05.855 [2024-11-28 
18:19:17.785410] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5708: invalid cntlid range [65520-65519] 00:17:05.855 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:05.855 { 00:17:05.855 "nqn": "nqn.2016-06.io.spdk:cnode5708", 00:17:05.855 "min_cntlid": 65520, 00:17:05.855 "method": "nvmf_create_subsystem", 00:17:05.855 "req_id": 1 00:17:05.855 } 00:17:05.855 Got JSON-RPC error response 00:17:05.855 response: 00:17:05.855 { 00:17:05.855 "code": -32602, 00:17:05.855 "message": "Invalid cntlid range [65520-65519]" 00:17:05.855 }' 00:17:05.855 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:05.855 { 00:17:05.855 "nqn": "nqn.2016-06.io.spdk:cnode5708", 00:17:05.855 "min_cntlid": 65520, 00:17:05.855 "method": "nvmf_create_subsystem", 00:17:05.855 "req_id": 1 00:17:05.855 } 00:17:05.855 Got JSON-RPC error response 00:17:05.855 response: 00:17:05.855 { 00:17:05.855 "code": -32602, 00:17:05.855 "message": "Invalid cntlid range [65520-65519]" 00:17:05.855 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.855 18:19:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17691 -I 0 00:17:06.113 [2024-11-28 18:19:18.054327] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17691: invalid cntlid range [1-0] 00:17:06.113 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:06.113 { 00:17:06.113 "nqn": "nqn.2016-06.io.spdk:cnode17691", 00:17:06.113 "max_cntlid": 0, 00:17:06.113 "method": "nvmf_create_subsystem", 00:17:06.113 "req_id": 1 00:17:06.113 } 00:17:06.113 Got JSON-RPC error response 00:17:06.113 response: 00:17:06.113 { 00:17:06.113 "code": -32602, 00:17:06.113 "message": "Invalid cntlid range [1-0]" 00:17:06.113 }' 00:17:06.113 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:06.113 { 00:17:06.113 "nqn": "nqn.2016-06.io.spdk:cnode17691", 00:17:06.113 "max_cntlid": 0, 00:17:06.113 "method": "nvmf_create_subsystem", 00:17:06.113 "req_id": 1 00:17:06.113 } 00:17:06.113 Got JSON-RPC error response 00:17:06.113 response: 00:17:06.113 { 00:17:06.113 "code": -32602, 00:17:06.113 "message": "Invalid cntlid range [1-0]" 00:17:06.113 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:06.113 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13534 -I 65520 00:17:06.371 [2024-11-28 18:19:18.323311] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13534: invalid cntlid range [1-65520] 00:17:06.372 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:06.372 { 00:17:06.372 "nqn": "nqn.2016-06.io.spdk:cnode13534", 00:17:06.372 "max_cntlid": 65520, 00:17:06.372 "method": "nvmf_create_subsystem", 00:17:06.372 "req_id": 1 00:17:06.372 } 00:17:06.372 Got JSON-RPC error response 00:17:06.372 response: 00:17:06.372 { 00:17:06.372 "code": -32602, 00:17:06.372 "message": "Invalid cntlid range [1-65520]" 00:17:06.372 }' 00:17:06.372 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:06.372 { 00:17:06.372 "nqn": "nqn.2016-06.io.spdk:cnode13534", 
00:17:06.372 "max_cntlid": 65520, 00:17:06.372 "method": "nvmf_create_subsystem", 00:17:06.372 "req_id": 1 00:17:06.372 } 00:17:06.372 Got JSON-RPC error response 00:17:06.372 response: 00:17:06.372 { 00:17:06.372 "code": -32602, 00:17:06.372 "message": "Invalid cntlid range [1-65520]" 00:17:06.372 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:06.372 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15725 -i 6 -I 5 00:17:06.630 [2024-11-28 18:19:18.608289] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15725: invalid cntlid range [6-5] 00:17:06.630 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:06.630 { 00:17:06.630 "nqn": "nqn.2016-06.io.spdk:cnode15725", 00:17:06.630 "min_cntlid": 6, 00:17:06.630 "max_cntlid": 5, 00:17:06.630 "method": "nvmf_create_subsystem", 00:17:06.630 "req_id": 1 00:17:06.630 } 00:17:06.630 Got JSON-RPC error response 00:17:06.630 response: 00:17:06.630 { 00:17:06.630 "code": -32602, 00:17:06.630 "message": "Invalid cntlid range [6-5]" 00:17:06.630 }' 00:17:06.630 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:06.630 { 00:17:06.630 "nqn": "nqn.2016-06.io.spdk:cnode15725", 00:17:06.630 "min_cntlid": 6, 00:17:06.630 "max_cntlid": 5, 00:17:06.630 "method": "nvmf_create_subsystem", 00:17:06.630 "req_id": 1 00:17:06.630 } 00:17:06.630 Got JSON-RPC error response 00:17:06.630 response: 00:17:06.630 { 00:17:06.630 "code": -32602, 00:17:06.630 "message": "Invalid cntlid range [6-5]" 00:17:06.630 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:06.630 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:06.888 { 00:17:06.888 "name": "foobar", 00:17:06.888 "method": "nvmf_delete_target", 00:17:06.888 "req_id": 1 00:17:06.888 } 00:17:06.888 Got JSON-RPC error response 00:17:06.888 response: 00:17:06.888 { 00:17:06.888 "code": -32602, 00:17:06.888 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:06.888 }' 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:06.888 { 00:17:06.888 "name": "foobar", 00:17:06.888 "method": "nvmf_delete_target", 00:17:06.888 "req_id": 1 00:17:06.888 } 00:17:06.888 Got JSON-RPC error response 00:17:06.888 response: 00:17:06.888 { 00:17:06.888 "code": -32602, 00:17:06.888 "message": "The specified target doesn't exist, cannot delete it." 
00:17:06.888 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:06.888 rmmod nvme_tcp 00:17:06.888 rmmod nvme_fabrics 00:17:06.888 rmmod nvme_keyring 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2453659 ']' 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2453659 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2453659 ']' 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2453659 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2453659 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2453659' 00:17:06.888 killing process with pid 2453659 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2453659 00:17:06.888 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2453659 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.269 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.178 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:10.178 00:17:10.178 real 0m10.702s 00:17:10.178 user 0m27.110s 00:17:10.178 sys 0m2.672s 00:17:10.178 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.178 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:10.178 ************************************ 00:17:10.178 END TEST nvmf_invalid 00:17:10.178 ************************************ 00:17:10.178 18:19:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:10.179 18:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.179 18:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.179 18:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.179 ************************************ 00:17:10.179 START TEST nvmf_connect_stress 00:17:10.179 ************************************ 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:10.179 * Looking for test storage... 
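Every nvmf_invalid case traced above follows the same shape: issue an RPC that is expected to fail, capture the JSON-RPC error text, and glob-match the message. A condensed sketch of that capture-and-match pattern is shown here; the rpc.py path and the cnode number are illustrative placeholders, while the expected substring is taken from the cntlid cases above.

# Sketch of the capture-and-match pattern used throughout the nvmf_invalid traces.
# The rpc.py path and the cnode number are illustrative placeholders.
rpc=./scripts/rpc.py
out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -i 0 2>&1) || true
[[ $out == *"Invalid cntlid range"* ]] || { echo "unexpected error: $out" >&2; exit 1; }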
00:17:10.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:10.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.179 --rc genhtml_branch_coverage=1 00:17:10.179 --rc genhtml_function_coverage=1 00:17:10.179 --rc genhtml_legend=1 00:17:10.179 --rc geninfo_all_blocks=1 00:17:10.179 --rc geninfo_unexecuted_blocks=1 00:17:10.179 00:17:10.179 ' 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:10.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.179 --rc genhtml_branch_coverage=1 00:17:10.179 --rc genhtml_function_coverage=1 00:17:10.179 --rc genhtml_legend=1 00:17:10.179 --rc geninfo_all_blocks=1 00:17:10.179 --rc geninfo_unexecuted_blocks=1 00:17:10.179 00:17:10.179 ' 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:10.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.179 --rc genhtml_branch_coverage=1 00:17:10.179 --rc genhtml_function_coverage=1 00:17:10.179 --rc genhtml_legend=1 00:17:10.179 --rc geninfo_all_blocks=1 00:17:10.179 --rc geninfo_unexecuted_blocks=1 00:17:10.179 00:17:10.179 ' 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:10.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.179 --rc genhtml_branch_coverage=1 00:17:10.179 --rc genhtml_function_coverage=1 00:17:10.179 --rc genhtml_legend=1 00:17:10.179 --rc geninfo_all_blocks=1 00:17:10.179 --rc geninfo_unexecuted_blocks=1 00:17:10.179 00:17:10.179 ' 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.179 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:10.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.180 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:12.087 18:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:12.087 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:12.087 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:12.087 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:12.087 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:12.087 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.088 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.346 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.346 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.346 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:12.346 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:12.346 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.346 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.346 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:12.346 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:12.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:17:12.347 00:17:12.347 --- 10.0.0.2 ping statistics --- 00:17:12.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.347 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:12.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:17:12.347 00:17:12.347 --- 10.0.0.1 ping statistics --- 00:17:12.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.347 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2456540 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2456540 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2456540 ']' 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:12.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.347 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.347 [2024-11-28 18:19:24.415877] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:12.347 [2024-11-28 18:19:24.416028] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.607 [2024-11-28 18:19:24.573720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:12.866 [2024-11-28 18:19:24.718223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.866 [2024-11-28 18:19:24.718306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.866 [2024-11-28 18:19:24.718333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.866 [2024-11-28 18:19:24.718358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.866 [2024-11-28 18:19:24.718378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.866 [2024-11-28 18:19:24.721117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.866 [2024-11-28 18:19:24.721181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.866 [2024-11-28 18:19:24.721185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.432 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.432 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:13.432 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:13.432 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:13.432 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.433 [2024-11-28 18:19:25.392982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
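The rpc_cmd calls traced above are thin wrappers around scripts/rpc.py, talking to the nvmf_tgt instance that was started under the cvl_0_0_ns_spdk namespace and is serving the /var/tmp/spdk.sock RPC socket. A minimal sketch of the same target bring-up done by hand, reusing only the arguments already visible in the trace (the rpc variable name is illustrative, and the default RPC socket path is assumed):

    # Assumes nvmf_tgt is already running and listening on /var/tmp/spdk.sock
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with 8192-byte I/O units, matching "-t tcp -o -u 8192" above
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # Subsystem open to any host, fixed serial number, up to 10 namespaces
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

The 10.0.0.2:4420 listener and the NULL1 null bdev that connect_stress.sh sets up next follow the same rpc_cmd pattern in the entries below.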
00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.433 [2024-11-28 18:19:25.413287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.433 NULL1 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2456681 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:13.433 18:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.433 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.004 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.004 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:14.004 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.004 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.004 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.264 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.264 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:14.264 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.264 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.265 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.524 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.524 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:14.524 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.524 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.524 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.782 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.782 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:14.782 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.782 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.782 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.043 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.043 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:15.043 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.043 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.043 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.612 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.612 18:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:15.612 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.612 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.612 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.871 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.871 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:15.871 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.871 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.871 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.130 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.130 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:16.130 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.130 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.130 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.388 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.388 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:16.388 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.388 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.388 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.646 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.646 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:16.646 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.646 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.646 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.215 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.215 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:17.215 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.215 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.215 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.475 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.475 18:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:17.475 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.475 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.475 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.734 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.734 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:17.734 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.734 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.734 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.992 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.992 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:17.992 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.992 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.992 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.250 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.250 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:18.250 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.510 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.510 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.770 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.771 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:18.771 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.771 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.771 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.029 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.029 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:19.029 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.029 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.029 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.287 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.287 18:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:19.287 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.287 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.287 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.855 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.855 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:19.855 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.855 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.855 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.114 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.114 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:20.114 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.114 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.114 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.372 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.372 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:20.372 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.372 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.372 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.633 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.633 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:20.633 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.633 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.633 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.891 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.891 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:20.891 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.891 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.891 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.457 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.457 18:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:21.457 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.457 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.457 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.716 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.716 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:21.716 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.716 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.717 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.978 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.978 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:21.978 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.978 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.978 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.238 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.238 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:22.238 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.238 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.238 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.496 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.496 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:22.497 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.497 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.497 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.064 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.064 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:23.064 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.064 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.064 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.323 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.323 18:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:23.323 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.323 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.323 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.581 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.581 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:23.581 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.581 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.581 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.839 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:23.839 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.839 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2456681 00:17:23.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2456681) - No such process 00:17:23.839 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2456681 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.840 rmmod nvme_tcp 00:17:23.840 rmmod nvme_fabrics 00:17:23.840 rmmod nvme_keyring 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2456540 ']' 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2456540 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2456540 ']' 00:17:23.840 18:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2456540 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.840 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2456540 00:17:24.097 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:24.097 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:24.097 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2456540' 00:17:24.097 killing process with pid 2456540 00:17:24.097 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2456540 00:17:24.097 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2456540 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.072 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:27.611 00:17:27.611 real 0m17.099s 00:17:27.611 user 0m42.647s 00:17:27.611 sys 0m6.141s 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.611 ************************************ 00:17:27.611 END TEST nvmf_connect_stress 00:17:27.611 ************************************ 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:27.611 
18:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.611 ************************************ 00:17:27.611 START TEST nvmf_fused_ordering 00:17:27.611 ************************************ 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:27.611 * Looking for test storage... 00:17:27.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:27.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.611 --rc genhtml_branch_coverage=1 00:17:27.611 --rc genhtml_function_coverage=1 00:17:27.611 --rc genhtml_legend=1 00:17:27.611 --rc geninfo_all_blocks=1 00:17:27.611 --rc geninfo_unexecuted_blocks=1 00:17:27.611 00:17:27.611 ' 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:27.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.611 --rc genhtml_branch_coverage=1 00:17:27.611 --rc genhtml_function_coverage=1 00:17:27.611 --rc genhtml_legend=1 00:17:27.611 --rc geninfo_all_blocks=1 00:17:27.611 --rc geninfo_unexecuted_blocks=1 00:17:27.611 00:17:27.611 ' 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:27.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.611 --rc genhtml_branch_coverage=1 00:17:27.611 --rc genhtml_function_coverage=1 00:17:27.611 --rc genhtml_legend=1 00:17:27.611 --rc geninfo_all_blocks=1 00:17:27.611 --rc geninfo_unexecuted_blocks=1 00:17:27.611 00:17:27.611 ' 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:27.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.611 --rc genhtml_branch_coverage=1 00:17:27.611 --rc genhtml_function_coverage=1 00:17:27.611 --rc genhtml_legend=1 00:17:27.611 --rc geninfo_all_blocks=1 00:17:27.611 --rc geninfo_unexecuted_blocks=1 00:17:27.611 00:17:27.611 ' 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
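Note: the cmp_versions / lt trace above decides which lcov rc options to export based on the installed lcov version. A simplified stand-in for that check, assuming GNU sort -V is available; version_lt and the option strings are illustrative, only the "< 2" branch is taken from the log.

#!/usr/bin/env bash
# Sketch: pick LCOV_OPTS based on whether the installed lcov is older than 2.x.

version_lt() {
    # True when $1 sorts strictly before $2 as a dotted version string.
    [[ $1 == "$2" ]] && return 1
    [[ $(printf '%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]
}

lcov_ver=$(lcov --version 2>/dev/null | awk '{print $NF}')
if [[ -n $lcov_ver ]] && version_lt "$lcov_ver" 2; then
    # Older lcov (as in the log above) needs the lcov_* rc spellings.
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
else
    LCOV_OPTS=''   # newer lcov: keep defaults in this sketch
fi
echo "using LCOV_OPTS=$LCOV_OPTS"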
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.611 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:27.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.612 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:29.511 18:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:29.511 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:29.511 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:29.511 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:29.511 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
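Note: the "Found 0000:0a:00.x (0x8086 - 0x159b)" lines above come from scanning sysfs for supported NIC PCI IDs and collecting their net interfaces. The loop below is a simplified stand-in for gather_supported_nvmf_pci_devs, hard-coded to the Intel E810 ID seen in the log; it is not the SPDK helper itself.

#!/usr/bin/env bash
# Sketch: find E810 (0x8086:0x159b) PCI functions and the net devices sysfs
# exposes for them, keeping only ports whose link is up.
shopt -s nullglob
net_devs=()

for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
    [[ $(cat "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        dev=${net##*/}
        if [[ $(cat "$net/operstate" 2>/dev/null) == up ]]; then
            echo "Found $(basename "$pci"): $dev"
            net_devs+=("$dev")
        fi
    done
done

printf 'usable ports: %s\n' "${net_devs[*]}"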
-- # net_devs+=("${pci_net_devs[@]}") 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:29.511 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:29.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:17:29.512 00:17:29.512 --- 10.0.0.2 ping statistics --- 00:17:29.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.512 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:29.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:17:29.512 00:17:29.512 --- 10.0.0.1 ping statistics --- 00:17:29.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.512 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2459990 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2459990 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2459990 ']' 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
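Note: the nvmf_tcp_init trace above builds the loopback-style test bed: one E810 port stays in the default namespace as the initiator, the other moves into a private namespace as the target, then both directions are ping-checked. A condensed sketch of those exact commands, using the interface and namespace names printed in the log:

#!/usr/bin/env bash
# Sketch: namespace-based NVMe/TCP test bed, as set up above.
set -e

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow the NVMe/TCP port through any host firewall rules, as the ipts
# helper does above.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check connectivity in both directions before running any tests.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1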
/var/tmp/spdk.sock...' 00:17:29.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.512 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:29.770 [2024-11-28 18:19:41.614682] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:29.770 [2024-11-28 18:19:41.614839] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.770 [2024-11-28 18:19:41.769140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.028 [2024-11-28 18:19:41.912501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.028 [2024-11-28 18:19:41.912571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.029 [2024-11-28 18:19:41.912597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.029 [2024-11-28 18:19:41.912629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.029 [2024-11-28 18:19:41.912650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.029 [2024-11-28 18:19:41.914206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.594 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.594 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:30.594 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:30.594 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.594 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.852 [2024-11-28 18:19:42.706508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
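Note: nvmfappstart, traced above, launches nvmf_tgt inside the target namespace pinned to core 1 (-m 0x2) and then waits for its RPC socket before issuing any commands. A minimal sketch of that step; SPDK_BIN, the loop bound, and the socket path check are illustrative.

#!/usr/bin/env bash
# Sketch: start nvmf_tgt in the target namespace and wait for its RPC socket.
SPDK_BIN=./build/bin/nvmf_tgt
NS=cvl_0_0_ns_spdk

ip netns exec "$NS" "$SPDK_BIN" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Wait (up to ~100s) for /var/tmp/spdk.sock before sending RPCs, like
# waitforlisten does above.
for _ in $(seq 1 100); do
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 1
done
[[ -S /var/tmp/spdk.sock ]] || { echo "nvmf_tgt did not start" >&2; exit 1; }
echo "nvmf_tgt running as pid $nvmfpid"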
-- # [[ 0 == 0 ]] 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.852 [2024-11-28 18:19:42.722750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.852 NULL1 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.852 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:30.853 [2024-11-28 18:19:42.793308] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
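Note: the rpc_cmd calls above provision the target that the fused_ordering tool then exercises: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, and a null bdev exposed as namespace 1. A sketch of the equivalent direct rpc.py invocations; the RPC path and the relative test-binary path are illustrative, the arguments are taken from the log.

#!/usr/bin/env bash
# Sketch: provision the target for the fused_ordering run logged above.
RPC=./scripts/rpc.py   # placeholder path to SPDK's rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# The test binary then connects as an initiator and issues fused command
# pairs against the null namespace, producing the fused_ordering(N) output
# that follows.
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'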
00:17:30.853 [2024-11-28 18:19:42.793395] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2460144 ] 00:17:31.788 Attached to nqn.2016-06.io.spdk:cnode1 00:17:31.788 Namespace ID: 1 size: 1GB 00:17:31.788 fused_ordering(0) 00:17:31.788 fused_ordering(1) 00:17:31.788 fused_ordering(2) 00:17:31.788 fused_ordering(3) 00:17:31.788 fused_ordering(4) 00:17:31.788 fused_ordering(5) 00:17:31.788 fused_ordering(6) 00:17:31.788 fused_ordering(7) 00:17:31.788 fused_ordering(8) 00:17:31.788 fused_ordering(9) 00:17:31.788 fused_ordering(10) 00:17:31.788 fused_ordering(11) 00:17:31.788 fused_ordering(12) 00:17:31.788 fused_ordering(13) 00:17:31.788 fused_ordering(14) 00:17:31.788 fused_ordering(15) 00:17:31.788 fused_ordering(16) 00:17:31.788 fused_ordering(17) 00:17:31.788 fused_ordering(18) 00:17:31.788 fused_ordering(19) 00:17:31.788 fused_ordering(20) 00:17:31.788 fused_ordering(21) 00:17:31.788 fused_ordering(22) 00:17:31.788 fused_ordering(23) 00:17:31.788 fused_ordering(24) 00:17:31.788 fused_ordering(25) 00:17:31.788 fused_ordering(26) 00:17:31.788 fused_ordering(27) 00:17:31.788 fused_ordering(28) 00:17:31.788 fused_ordering(29) 00:17:31.788 fused_ordering(30) 00:17:31.788 fused_ordering(31) 00:17:31.788 fused_ordering(32) 00:17:31.788 fused_ordering(33) 00:17:31.788 fused_ordering(34) 00:17:31.788 fused_ordering(35) 00:17:31.788 fused_ordering(36) 00:17:31.788 fused_ordering(37) 00:17:31.788 fused_ordering(38) 00:17:31.788 fused_ordering(39) 00:17:31.788 fused_ordering(40) 00:17:31.788 fused_ordering(41) 00:17:31.788 fused_ordering(42) 00:17:31.788 fused_ordering(43) 00:17:31.788 fused_ordering(44) 00:17:31.788 fused_ordering(45) 00:17:31.788 fused_ordering(46) 00:17:31.788 fused_ordering(47) 00:17:31.788 fused_ordering(48) 00:17:31.788 fused_ordering(49) 00:17:31.788 fused_ordering(50) 00:17:31.788 fused_ordering(51) 00:17:31.788 fused_ordering(52) 00:17:31.788 fused_ordering(53) 00:17:31.788 fused_ordering(54) 00:17:31.788 fused_ordering(55) 00:17:31.788 fused_ordering(56) 00:17:31.788 fused_ordering(57) 00:17:31.788 fused_ordering(58) 00:17:31.788 fused_ordering(59) 00:17:31.788 fused_ordering(60) 00:17:31.788 fused_ordering(61) 00:17:31.788 fused_ordering(62) 00:17:31.788 fused_ordering(63) 00:17:31.788 fused_ordering(64) 00:17:31.788 fused_ordering(65) 00:17:31.788 fused_ordering(66) 00:17:31.788 fused_ordering(67) 00:17:31.788 fused_ordering(68) 00:17:31.788 fused_ordering(69) 00:17:31.788 fused_ordering(70) 00:17:31.788 fused_ordering(71) 00:17:31.788 fused_ordering(72) 00:17:31.788 fused_ordering(73) 00:17:31.788 fused_ordering(74) 00:17:31.788 fused_ordering(75) 00:17:31.788 fused_ordering(76) 00:17:31.788 fused_ordering(77) 00:17:31.788 fused_ordering(78) 00:17:31.788 fused_ordering(79) 00:17:31.788 fused_ordering(80) 00:17:31.788 fused_ordering(81) 00:17:31.788 fused_ordering(82) 00:17:31.788 fused_ordering(83) 00:17:31.788 fused_ordering(84) 00:17:31.788 fused_ordering(85) 00:17:31.788 fused_ordering(86) 00:17:31.788 fused_ordering(87) 00:17:31.788 fused_ordering(88) 00:17:31.788 fused_ordering(89) 00:17:31.788 fused_ordering(90) 00:17:31.788 fused_ordering(91) 00:17:31.788 fused_ordering(92) 00:17:31.788 fused_ordering(93) 00:17:31.788 fused_ordering(94) 00:17:31.788 fused_ordering(95) 00:17:31.788 fused_ordering(96) 00:17:31.788 fused_ordering(97) 00:17:31.788 fused_ordering(98) 
00:17:31.788 fused_ordering(99) 00:17:31.788 fused_ordering(100) 00:17:31.788 fused_ordering(101) 00:17:31.788 fused_ordering(102) 00:17:31.788 fused_ordering(103) 00:17:31.788 fused_ordering(104) 00:17:31.788 fused_ordering(105) 00:17:31.788 fused_ordering(106) 00:17:31.788 fused_ordering(107) 00:17:31.788 fused_ordering(108) 00:17:31.788 fused_ordering(109) 00:17:31.788 fused_ordering(110) 00:17:31.788 fused_ordering(111) 00:17:31.788 fused_ordering(112) 00:17:31.788 fused_ordering(113) 00:17:31.788 fused_ordering(114) 00:17:31.788 fused_ordering(115) 00:17:31.788 fused_ordering(116) 00:17:31.788 fused_ordering(117) 00:17:31.788 fused_ordering(118) 00:17:31.788 fused_ordering(119) 00:17:31.788 fused_ordering(120) 00:17:31.788 fused_ordering(121) 00:17:31.788 fused_ordering(122) 00:17:31.788 fused_ordering(123) 00:17:31.788 fused_ordering(124) 00:17:31.788 fused_ordering(125) 00:17:31.788 fused_ordering(126) 00:17:31.788 fused_ordering(127) 00:17:31.788 fused_ordering(128) 00:17:31.788 fused_ordering(129) 00:17:31.788 fused_ordering(130) 00:17:31.788 fused_ordering(131) 00:17:31.788 fused_ordering(132) 00:17:31.788 fused_ordering(133) 00:17:31.788 fused_ordering(134) 00:17:31.788 fused_ordering(135) 00:17:31.788 fused_ordering(136) 00:17:31.788 fused_ordering(137) 00:17:31.788 fused_ordering(138) 00:17:31.788 fused_ordering(139) 00:17:31.788 fused_ordering(140) 00:17:31.788 fused_ordering(141) 00:17:31.788 fused_ordering(142) 00:17:31.788 fused_ordering(143) 00:17:31.788 fused_ordering(144) 00:17:31.788 fused_ordering(145) 00:17:31.788 fused_ordering(146) 00:17:31.788 fused_ordering(147) 00:17:31.788 fused_ordering(148) 00:17:31.788 fused_ordering(149) 00:17:31.788 fused_ordering(150) 00:17:31.788 fused_ordering(151) 00:17:31.788 fused_ordering(152) 00:17:31.788 fused_ordering(153) 00:17:31.788 fused_ordering(154) 00:17:31.788 fused_ordering(155) 00:17:31.788 fused_ordering(156) 00:17:31.788 fused_ordering(157) 00:17:31.788 fused_ordering(158) 00:17:31.788 fused_ordering(159) 00:17:31.788 fused_ordering(160) 00:17:31.788 fused_ordering(161) 00:17:31.788 fused_ordering(162) 00:17:31.788 fused_ordering(163) 00:17:31.788 fused_ordering(164) 00:17:31.788 fused_ordering(165) 00:17:31.788 fused_ordering(166) 00:17:31.788 fused_ordering(167) 00:17:31.788 fused_ordering(168) 00:17:31.788 fused_ordering(169) 00:17:31.788 fused_ordering(170) 00:17:31.788 fused_ordering(171) 00:17:31.788 fused_ordering(172) 00:17:31.788 fused_ordering(173) 00:17:31.788 fused_ordering(174) 00:17:31.788 fused_ordering(175) 00:17:31.788 fused_ordering(176) 00:17:31.788 fused_ordering(177) 00:17:31.788 fused_ordering(178) 00:17:31.788 fused_ordering(179) 00:17:31.788 fused_ordering(180) 00:17:31.788 fused_ordering(181) 00:17:31.788 fused_ordering(182) 00:17:31.788 fused_ordering(183) 00:17:31.788 fused_ordering(184) 00:17:31.788 fused_ordering(185) 00:17:31.788 fused_ordering(186) 00:17:31.788 fused_ordering(187) 00:17:31.788 fused_ordering(188) 00:17:31.788 fused_ordering(189) 00:17:31.788 fused_ordering(190) 00:17:31.788 fused_ordering(191) 00:17:31.788 fused_ordering(192) 00:17:31.788 fused_ordering(193) 00:17:31.788 fused_ordering(194) 00:17:31.788 fused_ordering(195) 00:17:31.788 fused_ordering(196) 00:17:31.788 fused_ordering(197) 00:17:31.789 fused_ordering(198) 00:17:31.789 fused_ordering(199) 00:17:31.789 fused_ordering(200) 00:17:31.789 fused_ordering(201) 00:17:31.789 fused_ordering(202) 00:17:31.789 fused_ordering(203) 00:17:31.789 fused_ordering(204) 00:17:31.789 fused_ordering(205) 00:17:32.048 
fused_ordering(206) 00:17:32.048 fused_ordering(207) 00:17:32.048 fused_ordering(208) 00:17:32.048 fused_ordering(209) 00:17:32.048 fused_ordering(210) 00:17:32.048 fused_ordering(211) 00:17:32.048 fused_ordering(212) 00:17:32.048 fused_ordering(213) 00:17:32.048 fused_ordering(214) 00:17:32.048 fused_ordering(215) 00:17:32.048 fused_ordering(216) 00:17:32.048 fused_ordering(217) 00:17:32.048 fused_ordering(218) 00:17:32.048 fused_ordering(219) 00:17:32.048 fused_ordering(220) 00:17:32.048 fused_ordering(221) 00:17:32.048 fused_ordering(222) 00:17:32.048 fused_ordering(223) 00:17:32.048 fused_ordering(224) 00:17:32.048 fused_ordering(225) 00:17:32.048 fused_ordering(226) 00:17:32.048 fused_ordering(227) 00:17:32.048 fused_ordering(228) 00:17:32.048 fused_ordering(229) 00:17:32.048 fused_ordering(230) 00:17:32.048 fused_ordering(231) 00:17:32.048 fused_ordering(232) 00:17:32.048 fused_ordering(233) 00:17:32.048 fused_ordering(234) 00:17:32.048 fused_ordering(235) 00:17:32.048 fused_ordering(236) 00:17:32.048 fused_ordering(237) 00:17:32.048 fused_ordering(238) 00:17:32.048 fused_ordering(239) 00:17:32.048 fused_ordering(240) 00:17:32.048 fused_ordering(241) 00:17:32.048 fused_ordering(242) 00:17:32.048 fused_ordering(243) 00:17:32.048 fused_ordering(244) 00:17:32.048 fused_ordering(245) 00:17:32.048 fused_ordering(246) 00:17:32.048 fused_ordering(247) 00:17:32.048 fused_ordering(248) 00:17:32.048 fused_ordering(249) 00:17:32.048 fused_ordering(250) 00:17:32.048 fused_ordering(251) 00:17:32.048 fused_ordering(252) 00:17:32.048 fused_ordering(253) 00:17:32.048 fused_ordering(254) 00:17:32.048 fused_ordering(255) 00:17:32.048 fused_ordering(256) 00:17:32.048 fused_ordering(257) 00:17:32.048 fused_ordering(258) 00:17:32.048 fused_ordering(259) 00:17:32.048 fused_ordering(260) 00:17:32.048 fused_ordering(261) 00:17:32.048 fused_ordering(262) 00:17:32.048 fused_ordering(263) 00:17:32.048 fused_ordering(264) 00:17:32.048 fused_ordering(265) 00:17:32.048 fused_ordering(266) 00:17:32.048 fused_ordering(267) 00:17:32.048 fused_ordering(268) 00:17:32.048 fused_ordering(269) 00:17:32.048 fused_ordering(270) 00:17:32.048 fused_ordering(271) 00:17:32.048 fused_ordering(272) 00:17:32.048 fused_ordering(273) 00:17:32.048 fused_ordering(274) 00:17:32.048 fused_ordering(275) 00:17:32.048 fused_ordering(276) 00:17:32.048 fused_ordering(277) 00:17:32.048 fused_ordering(278) 00:17:32.048 fused_ordering(279) 00:17:32.048 fused_ordering(280) 00:17:32.048 fused_ordering(281) 00:17:32.048 fused_ordering(282) 00:17:32.048 fused_ordering(283) 00:17:32.048 fused_ordering(284) 00:17:32.048 fused_ordering(285) 00:17:32.048 fused_ordering(286) 00:17:32.048 fused_ordering(287) 00:17:32.048 fused_ordering(288) 00:17:32.048 fused_ordering(289) 00:17:32.048 fused_ordering(290) 00:17:32.048 fused_ordering(291) 00:17:32.048 fused_ordering(292) 00:17:32.048 fused_ordering(293) 00:17:32.048 fused_ordering(294) 00:17:32.048 fused_ordering(295) 00:17:32.048 fused_ordering(296) 00:17:32.048 fused_ordering(297) 00:17:32.048 fused_ordering(298) 00:17:32.048 fused_ordering(299) 00:17:32.048 fused_ordering(300) 00:17:32.048 fused_ordering(301) 00:17:32.048 fused_ordering(302) 00:17:32.048 fused_ordering(303) 00:17:32.048 fused_ordering(304) 00:17:32.048 fused_ordering(305) 00:17:32.048 fused_ordering(306) 00:17:32.048 fused_ordering(307) 00:17:32.048 fused_ordering(308) 00:17:32.048 fused_ordering(309) 00:17:32.048 fused_ordering(310) 00:17:32.048 fused_ordering(311) 00:17:32.048 fused_ordering(312) 00:17:32.048 fused_ordering(313) 
00:17:32.048 fused_ordering(314) 00:17:32.048 fused_ordering(315) 00:17:32.048 fused_ordering(316) 00:17:32.048 fused_ordering(317) 00:17:32.048 fused_ordering(318) 00:17:32.048 fused_ordering(319) 00:17:32.048 fused_ordering(320) 00:17:32.048 fused_ordering(321) 00:17:32.048 fused_ordering(322) 00:17:32.048 fused_ordering(323) 00:17:32.048 fused_ordering(324) 00:17:32.048 fused_ordering(325) 00:17:32.048 fused_ordering(326) 00:17:32.048 fused_ordering(327) 00:17:32.048 fused_ordering(328) 00:17:32.048 fused_ordering(329) 00:17:32.048 fused_ordering(330) 00:17:32.048 fused_ordering(331) 00:17:32.048 fused_ordering(332) 00:17:32.048 fused_ordering(333) 00:17:32.048 fused_ordering(334) 00:17:32.048 fused_ordering(335) 00:17:32.048 fused_ordering(336) 00:17:32.048 fused_ordering(337) 00:17:32.048 fused_ordering(338) 00:17:32.048 fused_ordering(339) 00:17:32.048 fused_ordering(340) 00:17:32.048 fused_ordering(341) 00:17:32.048 fused_ordering(342) 00:17:32.048 fused_ordering(343) 00:17:32.048 fused_ordering(344) 00:17:32.048 fused_ordering(345) 00:17:32.048 fused_ordering(346) 00:17:32.048 fused_ordering(347) 00:17:32.048 fused_ordering(348) 00:17:32.048 fused_ordering(349) 00:17:32.048 fused_ordering(350) 00:17:32.048 fused_ordering(351) 00:17:32.048 fused_ordering(352) 00:17:32.048 fused_ordering(353) 00:17:32.048 fused_ordering(354) 00:17:32.048 fused_ordering(355) 00:17:32.048 fused_ordering(356) 00:17:32.048 fused_ordering(357) 00:17:32.048 fused_ordering(358) 00:17:32.048 fused_ordering(359) 00:17:32.048 fused_ordering(360) 00:17:32.048 fused_ordering(361) 00:17:32.048 fused_ordering(362) 00:17:32.048 fused_ordering(363) 00:17:32.048 fused_ordering(364) 00:17:32.048 fused_ordering(365) 00:17:32.048 fused_ordering(366) 00:17:32.048 fused_ordering(367) 00:17:32.048 fused_ordering(368) 00:17:32.048 fused_ordering(369) 00:17:32.048 fused_ordering(370) 00:17:32.048 fused_ordering(371) 00:17:32.048 fused_ordering(372) 00:17:32.048 fused_ordering(373) 00:17:32.048 fused_ordering(374) 00:17:32.048 fused_ordering(375) 00:17:32.048 fused_ordering(376) 00:17:32.048 fused_ordering(377) 00:17:32.048 fused_ordering(378) 00:17:32.048 fused_ordering(379) 00:17:32.048 fused_ordering(380) 00:17:32.048 fused_ordering(381) 00:17:32.048 fused_ordering(382) 00:17:32.049 fused_ordering(383) 00:17:32.049 fused_ordering(384) 00:17:32.049 fused_ordering(385) 00:17:32.049 fused_ordering(386) 00:17:32.049 fused_ordering(387) 00:17:32.049 fused_ordering(388) 00:17:32.049 fused_ordering(389) 00:17:32.049 fused_ordering(390) 00:17:32.049 fused_ordering(391) 00:17:32.049 fused_ordering(392) 00:17:32.049 fused_ordering(393) 00:17:32.049 fused_ordering(394) 00:17:32.049 fused_ordering(395) 00:17:32.049 fused_ordering(396) 00:17:32.049 fused_ordering(397) 00:17:32.049 fused_ordering(398) 00:17:32.049 fused_ordering(399) 00:17:32.049 fused_ordering(400) 00:17:32.049 fused_ordering(401) 00:17:32.049 fused_ordering(402) 00:17:32.049 fused_ordering(403) 00:17:32.049 fused_ordering(404) 00:17:32.049 fused_ordering(405) 00:17:32.049 fused_ordering(406) 00:17:32.049 fused_ordering(407) 00:17:32.049 fused_ordering(408) 00:17:32.049 fused_ordering(409) 00:17:32.049 fused_ordering(410) 00:17:32.616 fused_ordering(411) 00:17:32.616 fused_ordering(412) 00:17:32.616 fused_ordering(413) 00:17:32.616 fused_ordering(414) 00:17:32.616 fused_ordering(415) 00:17:32.616 fused_ordering(416) 00:17:32.616 fused_ordering(417) 00:17:32.616 fused_ordering(418) 00:17:32.616 fused_ordering(419) 00:17:32.616 fused_ordering(420) 00:17:32.616 
fused_ordering(421) 00:17:32.616 ... fused_ordering(958) 00:17:34.121 [repetitive per-iteration fused_ordering(N) counter output for N=422..957 elided; the sequence continues uninterrupted below through N=1023]
00:17:34.121 fused_ordering(959) 00:17:34.121 fused_ordering(960) 00:17:34.121 fused_ordering(961) 00:17:34.121 fused_ordering(962) 00:17:34.121 fused_ordering(963) 00:17:34.121 fused_ordering(964) 00:17:34.121 fused_ordering(965) 00:17:34.121 fused_ordering(966) 00:17:34.121 fused_ordering(967) 00:17:34.121 fused_ordering(968) 00:17:34.121 fused_ordering(969) 00:17:34.121 fused_ordering(970) 00:17:34.121 fused_ordering(971) 00:17:34.121 fused_ordering(972) 00:17:34.121 fused_ordering(973) 00:17:34.121 fused_ordering(974) 00:17:34.121 fused_ordering(975) 00:17:34.121 fused_ordering(976) 00:17:34.121 fused_ordering(977) 00:17:34.121 fused_ordering(978) 00:17:34.121 fused_ordering(979) 00:17:34.121 fused_ordering(980) 00:17:34.121 fused_ordering(981) 00:17:34.121 fused_ordering(982) 00:17:34.121 fused_ordering(983) 00:17:34.121 fused_ordering(984) 00:17:34.121 fused_ordering(985) 00:17:34.121 fused_ordering(986) 00:17:34.121 fused_ordering(987) 00:17:34.121 fused_ordering(988) 00:17:34.121 fused_ordering(989) 00:17:34.121 fused_ordering(990) 00:17:34.121 fused_ordering(991) 00:17:34.121 fused_ordering(992) 00:17:34.121 fused_ordering(993) 00:17:34.121 fused_ordering(994) 00:17:34.121 fused_ordering(995) 00:17:34.121 fused_ordering(996) 00:17:34.121 fused_ordering(997) 00:17:34.121 fused_ordering(998) 00:17:34.121 fused_ordering(999) 00:17:34.121 fused_ordering(1000) 00:17:34.121 fused_ordering(1001) 00:17:34.121 fused_ordering(1002) 00:17:34.121 fused_ordering(1003) 00:17:34.121 fused_ordering(1004) 00:17:34.121 fused_ordering(1005) 00:17:34.121 fused_ordering(1006) 00:17:34.121 fused_ordering(1007) 00:17:34.121 fused_ordering(1008) 00:17:34.121 fused_ordering(1009) 00:17:34.121 fused_ordering(1010) 00:17:34.121 fused_ordering(1011) 00:17:34.121 fused_ordering(1012) 00:17:34.121 fused_ordering(1013) 00:17:34.121 fused_ordering(1014) 00:17:34.121 fused_ordering(1015) 00:17:34.121 fused_ordering(1016) 00:17:34.121 fused_ordering(1017) 00:17:34.121 fused_ordering(1018) 00:17:34.121 fused_ordering(1019) 00:17:34.121 fused_ordering(1020) 00:17:34.121 fused_ordering(1021) 00:17:34.121 fused_ordering(1022) 00:17:34.121 fused_ordering(1023) 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:34.121 rmmod nvme_tcp 00:17:34.121 rmmod nvme_fabrics 00:17:34.121 rmmod nvme_keyring 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:34.121 18:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2459990 ']' 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2459990 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2459990 ']' 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2459990 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2459990 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2459990' 00:17:34.121 killing process with pid 2459990 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2459990 00:17:34.121 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2459990 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.497 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.397 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:37.398 00:17:37.398 real 0m10.249s 00:17:37.398 user 0m8.652s 00:17:37.398 sys 0m3.617s 00:17:37.398 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.398 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.398 ************************************ 00:17:37.398 END TEST nvmf_fused_ordering 00:17:37.398 
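Condensed for reference, the nvmftestfini teardown traced above amounts to unloading the kernel NVMe/TCP modules, stopping the target process, restoring the firewall, and dropping the test network namespace. A minimal sketch of that sequence, using this run's PID and interface names (remove_spdk_ns is not expanded in the trace, so the netns deletion is an assumption):

    #!/usr/bin/env bash
    # Cleanup steps as seen in the trace above (run as root); 2459990 was this run's nvmf_tgt PID.
    NVMFPID=2459990
    sync
    modprobe -v -r nvme-tcp        # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines come from this
    modprobe -v -r nvme-fabrics
    kill "$NVMFPID"
    while kill -0 "$NVMFPID" 2>/dev/null; do sleep 0.5; done   # wait for the reactor to exit
    # Strip the SPDK_NVMF-tagged iptables rules added during setup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Assumed effect of remove_spdk_ns: delete the target netns and flush the leftover address.
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null
    ip -4 addr flush cvl_0_1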
************************************ 00:17:37.398 18:19:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:37.398 18:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.398 18:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.398 18:19:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.398 ************************************ 00:17:37.398 START TEST nvmf_ns_masking 00:17:37.398 ************************************ 00:17:37.398 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:37.657 * Looking for test storage... 00:17:37.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:37.657 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:37.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.658 --rc genhtml_branch_coverage=1 00:17:37.658 --rc genhtml_function_coverage=1 00:17:37.658 --rc genhtml_legend=1 00:17:37.658 --rc geninfo_all_blocks=1 00:17:37.658 --rc geninfo_unexecuted_blocks=1 00:17:37.658 00:17:37.658 ' 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:37.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.658 --rc genhtml_branch_coverage=1 00:17:37.658 --rc genhtml_function_coverage=1 00:17:37.658 --rc genhtml_legend=1 00:17:37.658 --rc geninfo_all_blocks=1 00:17:37.658 --rc geninfo_unexecuted_blocks=1 00:17:37.658 00:17:37.658 ' 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:37.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.658 --rc genhtml_branch_coverage=1 00:17:37.658 --rc genhtml_function_coverage=1 00:17:37.658 --rc genhtml_legend=1 00:17:37.658 --rc geninfo_all_blocks=1 00:17:37.658 --rc geninfo_unexecuted_blocks=1 00:17:37.658 00:17:37.658 ' 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:37.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.658 --rc genhtml_branch_coverage=1 00:17:37.658 --rc genhtml_function_coverage=1 00:17:37.658 --rc genhtml_legend=1 00:17:37.658 --rc geninfo_all_blocks=1 00:17:37.658 --rc geninfo_unexecuted_blocks=1 00:17:37.658 00:17:37.658 ' 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.658 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=53c9fc09-dbc1-4b00-8e84-e5b1664be995 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f93bfc34-7985-440d-8bb4-da8f430fd6ab 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0d047a68-64fd-4edd-9c97-20980ddd8221 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.659 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:40.190 18:19:51 
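For orientation, the ns_masking variables initialized just above reduce to a handful of per-run identifiers; a minimal sketch (all UUID values are regenerated by uuidgen on every run):

    #!/usr/bin/env bash
    # Identifiers used by the ns_masking flow in this run.
    ns1uuid=$(uuidgen)                      # UUID reserved for namespace 1 (used later in the test)
    ns2uuid=$(uuidgen)                      # UUID reserved for namespace 2
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=$(uuidgen)                       # later passed to 'nvme connect -I'
    echo "$SUBSYSNQN $HOSTNQN1 $HOSTNQN2 $HOSTID $ns1uuid $ns2uuid"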
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:40.190 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:40.191 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:40.191 18:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:40.191 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:40.191 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
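The device scan above finds two Intel E810 ports (device ID 0x159b, ice driver) and resolves their kernel net interfaces through sysfs. A rough equivalent of that lookup, assuming the 0000:0a:00.0/0000:0a:00.1 addresses reported on this host:

    #!/usr/bin/env bash
    # Resolve the net device name behind each E810 PCI function (plain sysfs lookup).
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net devices under $pci: $(basename "$netdir")"   # cvl_0_0 / cvl_0_1 on this host
        done
    done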
00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:40.191 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.191 18:19:51 
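The nvmf_tcp_init sequence above builds the usual two-port loopback topology: one E810 port is moved into a dedicated network namespace and serves as the target, the other stays in the root namespace as the initiator. Roughly, with the addresses and names from this run (run as root):

    #!/usr/bin/env bash
    # Target side lives in its own netns; initiator side stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up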
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:40.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:17:40.191 00:17:40.191 --- 10.0.0.2 ping statistics --- 00:17:40.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.191 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:17:40.191 00:17:40.191 --- 10.0.0.1 ping statistics --- 00:17:40.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.191 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:40.191 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2462609 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2462609 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2462609 ']' 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
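Before the target application starts, the script opens TCP port 4420 on the initiator interface and confirms plain IP reachability in both directions; condensed from the commands above:

    #!/usr/bin/env bash
    # Allow NVMe/TCP traffic in, tagged so teardown can strip the rule again.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace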
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.192 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:40.192 [2024-11-28 18:19:51.938880] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:40.192 [2024-11-28 18:19:51.939036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.192 [2024-11-28 18:19:52.105297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.192 [2024-11-28 18:19:52.240855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.192 [2024-11-28 18:19:52.240949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.192 [2024-11-28 18:19:52.240975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.192 [2024-11-28 18:19:52.240999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.192 [2024-11-28 18:19:52.241020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
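nvmfappstart then launches the SPDK target inside the target namespace and waits until its RPC socket answers before any rpc.py calls are issued. The essential commands, with the workspace path from this job; the polling loop is only an approximation of waitforlisten:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start nvmf_tgt inside the target netns: -i 0 is the shared-memory id, -e 0xFFFF the tracepoint group mask.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll the default RPC socket until the app is ready to serve requests.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done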
00:17:40.192 [2024-11-28 18:19:52.242683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.127 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.127 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:41.127 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.127 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.127 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:41.127 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.127 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:41.127 [2024-11-28 18:19:53.188514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.127 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:41.127 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:41.127 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:41.726 Malloc1 00:17:41.726 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:41.984 Malloc2 00:17:41.984 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:42.242 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:42.500 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.758 [2024-11-28 18:19:54.675198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.758 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:42.758 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0d047a68-64fd-4edd-9c97-20980ddd8221 -a 10.0.0.2 -s 4420 -i 4 00:17:42.758 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:42.758 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:42.758 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:42.758 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:42.758 
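Stripped of the xtrace noise, the configuration issued through rpc.py above creates the TCP transport, two 64 MB malloc bdevs, one subsystem with namespace 1 visible by default, and a TCP listener, then connects from the initiator with an explicit host NQN and host identifier; roughly:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1          # 64 MB, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: connect with host NQN, explicit host identifier (-I), and 4 I/O queues.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 0d047a68-64fd-4edd-9c97-20980ddd8221 -a 10.0.0.2 -s 4420 -i 4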
18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:45.289 [ 0]:0x1 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8caa4b1c136b4277801cfb9ff7fa8bef 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8caa4b1c136b4277801cfb9ff7fa8bef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:45.289 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:45.289 [ 0]:0x1 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8caa4b1c136b4277801cfb9ff7fa8bef 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8caa4b1c136b4277801cfb9ff7fa8bef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:45.289 18:19:57 
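The ns_is_visible helper seen above simply asks the kernel which namespaces the connected controller exposes and checks that their NGUIDs are non-zero. A condensed equivalent, using the controller name and NSID from this run; the same pattern repeats after Malloc2 is attached as namespace 2 and, further below, after namespace 1 is re-added with --no-auto-visible:

    #!/usr/bin/env bash
    # Find the controller backing nqn.2016-06.io.spdk:cnode1 (nvme0 in this run).
    ctrl=$(nvme list-subsys -o json \
          | jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name')
    # A namespace counts as visible if it shows up in list-ns and reports a non-zero NGUID.
    nvme list-ns "/dev/$ctrl" | grep 0x1
    nguid=$(nvme id-ns "/dev/$ctrl" -n 0x1 -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]] && echo "nsid 1 visible (nguid=$nguid)"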
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:45.289 [ 1]:0x2 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=924a95222c6246718f34274b067e5379 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 924a95222c6246718f34274b067e5379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:45.289 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:45.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:45.548 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:45.807 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:46.066 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:46.066 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0d047a68-64fd-4edd-9c97-20980ddd8221 -a 10.0.0.2 -s 4420 -i 4 00:17:46.325 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:46.325 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:46.325 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:46.325 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:46.325 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:46.325 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
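
The "[ 0]:0x1" / "[ 1]:0x2" lines above come from the ns_is_visible helper in ns_masking.sh: list the active namespace IDs, then confirm that the NGUID reported by Identify Namespace is non-zero, since an all-zero NGUID is what the target returns for a namespace that is masked for this host. A condensed sketch, assuming the controller resolved to /dev/nvme0 as in this run:

    ns_is_visible() {                                        # sketch of ns_masking.sh lines 43-45
        nvme list-ns /dev/nvme0 | grep "$1"                  # prints e.g. "[ 0]:0x1" when the NSID is listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]   # all-zero NGUID => hidden from this host
    }
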
return 0 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:48.228 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:48.486 [ 0]:0x2 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
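
Negative checks such as "NOT ns_is_visible 0x1" above use the NOT wrapper from autotest_common.sh, which succeeds only when the wrapped command fails. The full helper also validates its argument and special-cases exit codes above 128; this sketch omits that:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))            # success only if the wrapped command failed
    }
    NOT ns_is_visible 0x1        # passes while NSID 1 is masked for the connected host
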
nguid=924a95222c6246718f34274b067e5379 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 924a95222c6246718f34274b067e5379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:48.486 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:48.745 [ 0]:0x1 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8caa4b1c136b4277801cfb9ff7fa8bef 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8caa4b1c136b4277801cfb9ff7fa8bef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:48.745 [ 1]:0x2 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=924a95222c6246718f34274b067e5379 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 924a95222c6246718f34274b067e5379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:48.745 18:20:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:49.003 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:49.003 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:49.003 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:49.003 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:49.003 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.003 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:49.003 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.003 18:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:49.003 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.003 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:49.261 [ 0]:0x2 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=924a95222c6246718f34274b067e5379 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 924a95222c6246718f34274b067e5379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:49.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.261 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:49.520 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:49.520 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0d047a68-64fd-4edd-9c97-20980ddd8221 -a 10.0.0.2 -s 4420 -i 4 00:17:49.778 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:49.778 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:49.778 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
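
The visibility flips traced above are driven entirely by the per-host masking RPCs: because the namespace was re-created with --no-auto-visible, it stays hidden until a host NQN is explicitly granted access, and disappears again when that grant is revoked. Reduced to the RPC sequence shown in the log (rpc.py path shortened):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # NSID 1 becomes visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # NSID 1 is hidden again
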
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.778 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:49.778 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:49.778 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:51.679 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:51.679 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:51.679 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:51.679 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:51.679 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.679 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:51.679 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:51.679 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:51.938 [ 0]:0x1 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8caa4b1c136b4277801cfb9ff7fa8bef 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8caa4b1c136b4277801cfb9ff7fa8bef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:51.938 [ 1]:0x2 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=924a95222c6246718f34274b067e5379 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 924a95222c6246718f34274b067e5379 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.938 18:20:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:52.269 [ 0]:0x2 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=924a95222c6246718f34274b067e5379 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 924a95222c6246718f34274b067e5379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:52.269 18:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:52.269 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:52.554 [2024-11-28 18:20:04.528191] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:52.554 request: 00:17:52.554 { 00:17:52.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.554 "nsid": 2, 00:17:52.554 "host": "nqn.2016-06.io.spdk:host1", 00:17:52.554 "method": "nvmf_ns_remove_host", 00:17:52.554 "req_id": 1 00:17:52.554 } 00:17:52.554 Got JSON-RPC error response 00:17:52.554 response: 00:17:52.554 { 00:17:52.554 "code": -32602, 00:17:52.554 "message": "Invalid parameters" 00:17:52.554 } 00:17:52.554 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:52.554 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.554 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.554 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.554 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:52.554 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:52.554 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:52.554 18:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:52.554 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.554 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:52.555 [ 0]:0x2 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=924a95222c6246718f34274b067e5379 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 924a95222c6246718f34274b067e5379 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:52.555 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:52.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.813 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2464246 00:17:52.813 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:52.813 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.813 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2464246 /var/tmp/host.sock 00:17:52.813 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2464246 ']' 00:17:52.813 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:52.813 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.813 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:52.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:52.813 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.813 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:52.813 [2024-11-28 18:20:04.786046] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:52.813 [2024-11-28 18:20:04.786217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464246 ] 00:17:53.071 [2024-11-28 18:20:04.929174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.071 [2024-11-28 18:20:05.066061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.006 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.006 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:54.006 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.265 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:54.523 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 53c9fc09-dbc1-4b00-8e84-e5b1664be995 00:17:54.523 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:54.523 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 53C9FC09DBC14B008E84E5B1664BE995 -i 00:17:54.782 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f93bfc34-7985-440d-8bb4-da8f430fd6ab 00:17:54.782 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:54.782 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F93BFC347985440D8BB4DA8F430FD6AB -i 00:17:55.041 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
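
For this second half of the test the namespaces are re-created with explicit NGUIDs derived from fixed UUIDs. The trace shows uuid2nguid (nvmf/common.sh) stripping dashes with tr -d -; the upper-casing below is inferred from the resulting value rather than visible in the trace, and the trailing -i is kept exactly as traced:

    uuid=53c9fc09-dbc1-4b00-8e84-e5b1664be995
    nguid=$(tr -d - <<< "${uuid^^}")          # 53C9FC09DBC14B008E84E5B1664BE995
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
    # Malloc2 gets NSID 2 the same way, from f93bfc34-7985-440d-8bb4-da8f430fd6ab
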
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:55.300 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:55.866 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:55.866 18:20:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:56.125 nvme0n1 00:17:56.125 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:56.125 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:56.692 nvme1n2 00:17:56.692 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:56.692 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:56.692 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:56.692 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:56.692 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:56.692 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:56.692 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:56.692 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:56.692 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:57.259 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 53c9fc09-dbc1-4b00-8e84-e5b1664be995 == \5\3\c\9\f\c\0\9\-\d\b\c\1\-\4\b\0\0\-\8\e\8\4\-\e\5\b\1\6\6\4\b\e\9\9\5 ]] 00:17:57.259 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:57.259 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:57.259 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:57.259 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
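
From here the verification runs through a second SPDK application listening on /var/tmp/host.sock: it attaches to the target once as host1 and once as host2, then uses the resulting bdev names and UUIDs to confirm that each host NQN sees only the namespace it was granted. The hostrpc calls above reduce to:

    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0    # yields nvme0n1
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1    # yields nvme1n2
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'  # expect 53c9fc09-dbc1-4b00-8e84-e5b1664be995
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'  # expect f93bfc34-7985-440d-8bb4-da8f430fd6ab
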
f93bfc34-7985-440d-8bb4-da8f430fd6ab == \f\9\3\b\f\c\3\4\-\7\9\8\5\-\4\4\0\d\-\8\b\b\4\-\d\a\8\f\4\3\0\f\d\6\a\b ]] 00:17:57.259 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:57.518 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 53c9fc09-dbc1-4b00-8e84-e5b1664be995 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 53C9FC09DBC14B008E84E5B1664BE995 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 53C9FC09DBC14B008E84E5B1664BE995 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:58.085 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 53C9FC09DBC14B008E84E5B1664BE995 00:17:58.085 [2024-11-28 18:20:10.159982] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:58.085 [2024-11-28 18:20:10.160060] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:58.085 [2024-11-28 18:20:10.160134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.085 request: 00:17:58.085 { 00:17:58.085 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.085 "namespace": { 00:17:58.085 "bdev_name": 
"invalid", 00:17:58.085 "nsid": 1, 00:17:58.085 "nguid": "53C9FC09DBC14B008E84E5B1664BE995", 00:17:58.085 "no_auto_visible": false, 00:17:58.085 "hide_metadata": false 00:17:58.085 }, 00:17:58.085 "method": "nvmf_subsystem_add_ns", 00:17:58.085 "req_id": 1 00:17:58.085 } 00:17:58.085 Got JSON-RPC error response 00:17:58.085 response: 00:17:58.085 { 00:17:58.085 "code": -32602, 00:17:58.085 "message": "Invalid parameters" 00:17:58.085 } 00:17:58.085 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:58.085 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.085 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.085 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.086 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 53c9fc09-dbc1-4b00-8e84-e5b1664be995 00:17:58.086 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:58.343 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 53C9FC09DBC14B008E84E5B1664BE995 -i 00:17:58.601 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:00.506 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:00.507 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:00.507 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2464246 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2464246 ']' 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2464246 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2464246 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2464246' 00:18:00.766 killing process with pid 2464246 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2464246 00:18:00.766 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2464246 00:18:03.299 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:03.299 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:03.299 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:03.299 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:03.299 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:03.299 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:03.299 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:03.299 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:03.299 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:03.560 rmmod nvme_tcp 00:18:03.560 rmmod nvme_fabrics 00:18:03.560 rmmod nvme_keyring 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2462609 ']' 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2462609 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2462609 ']' 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2462609 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2462609 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2462609' 00:18:03.560 killing process with pid 2462609 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2462609 00:18:03.560 18:20:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2462609 00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.943 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.481 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:07.481 00:18:07.481 real 0m29.530s 00:18:07.481 user 0m43.823s 00:18:07.481 sys 0m4.794s 00:18:07.481 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.481 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:07.481 ************************************ 00:18:07.481 END TEST nvmf_ns_masking 00:18:07.481 ************************************ 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:07.481 ************************************ 00:18:07.481 START TEST nvmf_nvme_cli 00:18:07.481 ************************************ 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:07.481 * Looking for test storage... 
00:18:07.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.481 --rc genhtml_branch_coverage=1 00:18:07.481 --rc genhtml_function_coverage=1 00:18:07.481 --rc genhtml_legend=1 00:18:07.481 --rc geninfo_all_blocks=1 00:18:07.481 --rc geninfo_unexecuted_blocks=1 00:18:07.481 00:18:07.481 ' 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.481 --rc genhtml_branch_coverage=1 00:18:07.481 --rc genhtml_function_coverage=1 00:18:07.481 --rc genhtml_legend=1 00:18:07.481 --rc geninfo_all_blocks=1 00:18:07.481 --rc geninfo_unexecuted_blocks=1 00:18:07.481 00:18:07.481 ' 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.481 --rc genhtml_branch_coverage=1 00:18:07.481 --rc genhtml_function_coverage=1 00:18:07.481 --rc genhtml_legend=1 00:18:07.481 --rc geninfo_all_blocks=1 00:18:07.481 --rc geninfo_unexecuted_blocks=1 00:18:07.481 00:18:07.481 ' 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.481 --rc genhtml_branch_coverage=1 00:18:07.481 --rc genhtml_function_coverage=1 00:18:07.481 --rc genhtml_legend=1 00:18:07.481 --rc geninfo_all_blocks=1 00:18:07.481 --rc geninfo_unexecuted_blocks=1 00:18:07.481 00:18:07.481 ' 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:07.481 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:07.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.482 18:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:07.482 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:09.391 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:09.391 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.391 
18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:09.391 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:09.391 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.391 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:09.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:18:09.392 00:18:09.392 --- 10.0.0.2 ping statistics --- 00:18:09.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.392 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:18:09.392 00:18:09.392 --- 10.0.0.1 ping statistics --- 00:18:09.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.392 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2467665 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2467665 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2467665 ']' 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.392 18:20:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:09.392 [2024-11-28 18:20:21.345846] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:18:09.392 [2024-11-28 18:20:21.346013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.652 [2024-11-28 18:20:21.503020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:09.652 [2024-11-28 18:20:21.649734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.652 [2024-11-28 18:20:21.649819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.652 [2024-11-28 18:20:21.649845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.652 [2024-11-28 18:20:21.649868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.652 [2024-11-28 18:20:21.649888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.652 [2024-11-28 18:20:21.652789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.652 [2024-11-28 18:20:21.652860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.652 [2024-11-28 18:20:21.652914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.652 [2024-11-28 18:20:21.652920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.590 [2024-11-28 18:20:22.353331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.590 Malloc0 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
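(Annotation: from here the nvme_cli test configures the target entirely through rpc_cmd against the nvmf_tgt just started inside the cvl_0_0_ns_spdk namespace, then exercises it from the host with nvme-cli. Condensed out of the traced commands, and assuming rpc_cmd simply forwards its arguments to scripts/rpc.py on the default /var/tmp/spdk.sock socket, the sequence is roughly the following; the <host NQN> and <host ID> placeholders stand for the generated hostnqn/hostid values visible elsewhere in the log.)

# Target side: transport, two 64 MiB malloc bdevs, subsystem, namespaces, listeners.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Host side: discover, connect, check the namespaces, disconnect.
nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=<host NQN> --hostid=<host ID>
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn=<host NQN> --hostid=<host ID>
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The discovery log with two records and the /dev/nvme0n1 and /dev/nvme0n2 block devices reported a few lines below are the expected outcome of attaching both malloc namespaces to cnode1.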
00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.590 Malloc1 00:18:10.590 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.591 [2024-11-28 18:20:22.549851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.591 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:10.849 00:18:10.849 Discovery Log Number of Records 2, Generation counter 2 00:18:10.849 =====Discovery Log Entry 0====== 00:18:10.849 trtype: tcp 00:18:10.849 adrfam: ipv4 00:18:10.849 subtype: current discovery subsystem 00:18:10.849 treq: not required 00:18:10.849 portid: 0 00:18:10.849 trsvcid: 4420 00:18:10.849 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:10.849 traddr: 10.0.0.2 00:18:10.849 eflags: explicit discovery connections, duplicate discovery information 00:18:10.849 sectype: none 00:18:10.849 =====Discovery Log Entry 1====== 00:18:10.849 trtype: tcp 00:18:10.849 adrfam: ipv4 00:18:10.849 subtype: nvme subsystem 00:18:10.849 treq: not required 00:18:10.849 portid: 0 00:18:10.849 trsvcid: 4420 00:18:10.849 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:10.849 traddr: 10.0.0.2 00:18:10.849 eflags: none 00:18:10.849 sectype: none 00:18:10.849 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:10.849 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:10.849 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:10.849 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.849 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:10.849 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:10.849 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.849 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:10.849 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.849 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:10.849 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.418 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:11.418 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:11.418 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.418 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:11.418 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:11.418 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:13.321 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:13.321 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:13.321 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.321 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:13.321 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.321 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:13.321 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:13.321 18:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:13.321 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.321 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:13.578 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:13.578 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.578 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:13.578 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.578 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:13.578 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:13.578 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:13.579 /dev/nvme0n2 ]] 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:13.579 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:14.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.147 18:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:14.147 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:14.147 rmmod nvme_tcp 00:18:14.147 rmmod nvme_fabrics 00:18:14.147 rmmod nvme_keyring 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2467665 ']' 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2467665 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2467665 ']' 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2467665 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2467665 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2467665' 00:18:14.147 killing process with pid 2467665 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2467665 00:18:14.147 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2467665 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.529 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.437 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:17.437 00:18:17.437 real 0m10.483s 00:18:17.437 user 0m22.840s 00:18:17.437 sys 0m2.442s 00:18:17.437 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.437 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.437 ************************************ 00:18:17.437 END TEST nvmf_nvme_cli 00:18:17.437 ************************************ 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:17.696 ************************************ 00:18:17.696 START TEST nvmf_auth_target 00:18:17.696 ************************************ 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 
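(Annotation: the banner and the real/user/sys figures above close the nvmf_nvme_cli test; run_test then launches target/auth.sh for the in-band NVMe-oF authentication test over TCP. A rough, illustrative sketch of what such a banner-and-timing wrapper does follows; it is an assumption of this note, not the actual autotest_common.sh implementation.)

run_test() {
  local name=$1; shift
  echo "************ START TEST $name ************"
  time "$@"            # run the wrapped test script, reporting real/user/sys
  echo "************ END TEST $name ************"
}
# e.g. run_test nvmf_auth_target .../spdk/test/nvmf/target/auth.sh --transport=tcp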
00:18:17.696 * Looking for test storage... 00:18:17.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:17.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.696 --rc genhtml_branch_coverage=1 00:18:17.696 --rc genhtml_function_coverage=1 00:18:17.696 --rc genhtml_legend=1 00:18:17.696 --rc geninfo_all_blocks=1 00:18:17.696 --rc geninfo_unexecuted_blocks=1 00:18:17.696 00:18:17.696 ' 00:18:17.696 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:17.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.697 --rc genhtml_branch_coverage=1 00:18:17.697 --rc genhtml_function_coverage=1 00:18:17.697 --rc genhtml_legend=1 00:18:17.697 --rc geninfo_all_blocks=1 00:18:17.697 --rc geninfo_unexecuted_blocks=1 00:18:17.697 00:18:17.697 ' 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:17.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.697 --rc genhtml_branch_coverage=1 00:18:17.697 --rc genhtml_function_coverage=1 00:18:17.697 --rc genhtml_legend=1 00:18:17.697 --rc geninfo_all_blocks=1 00:18:17.697 --rc geninfo_unexecuted_blocks=1 00:18:17.697 00:18:17.697 ' 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:17.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.697 --rc genhtml_branch_coverage=1 00:18:17.697 --rc genhtml_function_coverage=1 00:18:17.697 --rc genhtml_legend=1 00:18:17.697 --rc geninfo_all_blocks=1 00:18:17.697 --rc geninfo_unexecuted_blocks=1 00:18:17.697 00:18:17.697 ' 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.697 18:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:17.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:17.697 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:20.235 
18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:20.235 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:20.235 18:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:20.235 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:20.235 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.235 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:20.236 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:20.236 18:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:20.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:18:20.236 00:18:20.236 --- 10.0.0.2 ping statistics --- 00:18:20.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.236 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:20.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:18:20.236 00:18:20.236 --- 10.0.0.1 ping statistics --- 00:18:20.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.236 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2470320 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2470320 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2470320 ']' 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
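The nvmf_tcp_init block above turns the two E810 ports into a minimal two-endpoint NVMe/TCP topology: cvl_0_0 is moved into its own network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction confirms the path. Condensed into a standalone sketch (interface names and addresses are the ones from this run):

  #!/usr/bin/env bash
  # Re-creation of the topology built by nvmf_tcp_init in the trace above.
  TGT_IF=cvl_0_0            # target port, isolated in its own namespace
  INI_IF=cvl_0_1            # initiator port, left in the root namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # let NVMe/TCP (port 4420) in from the initiator-facing interface
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

  # same sanity checks as the trace: one ping in each direction
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1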
00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.236 18:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.171 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.171 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:21.171 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:21.171 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:21.171 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.171 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.171 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2470473 00:18:21.171 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:21.171 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:21.171 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:21.171 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d020bb6d730cb155bf95ae1c030a15ad4c6a329b3ad66dc9 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xcL 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d020bb6d730cb155bf95ae1c030a15ad4c6a329b3ad66dc9 0 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d020bb6d730cb155bf95ae1c030a15ad4c6a329b3ad66dc9 0 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d020bb6d730cb155bf95ae1c030a15ad4c6a329b3ad66dc9 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xcL 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xcL 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.xcL 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e1ef73bc1a13826210cb82f3c7b19412bd8bc6299f16f24aa57c900fb479ea32 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SAU 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e1ef73bc1a13826210cb82f3c7b19412bd8bc6299f16f24aa57c900fb479ea32 3 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e1ef73bc1a13826210cb82f3c7b19412bd8bc6299f16f24aa57c900fb479ea32 3 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e1ef73bc1a13826210cb82f3c7b19412bd8bc6299f16f24aa57c900fb479ea32 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SAU 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SAU 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.SAU 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f73639fb1ec965bf87383fe61b64215e 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Vpu 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f73639fb1ec965bf87383fe61b64215e 1 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f73639fb1ec965bf87383fe61b64215e 1 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f73639fb1ec965bf87383fe61b64215e 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Vpu 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Vpu 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Vpu 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d846cbe661cf300e98dace2fe32f2790fdcbcdbfa7f88855 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.bK4 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d846cbe661cf300e98dace2fe32f2790fdcbcdbfa7f88855 2 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d846cbe661cf300e98dace2fe32f2790fdcbcdbfa7f88855 2 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.172 18:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d846cbe661cf300e98dace2fe32f2790fdcbcdbfa7f88855 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.bK4 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.bK4 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.bK4 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:21.172 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cd70c46a500737b8261b74876216924a7eae2a5e4dcf2315 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XhG 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cd70c46a500737b8261b74876216924a7eae2a5e4dcf2315 2 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cd70c46a500737b8261b74876216924a7eae2a5e4dcf2315 2 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cd70c46a500737b8261b74876216924a7eae2a5e4dcf2315 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XhG 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XhG 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.XhG 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=02117c88ab0f886dff99a0ba78bf04a9 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3kT 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 02117c88ab0f886dff99a0ba78bf04a9 1 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 02117c88ab0f886dff99a0ba78bf04a9 1 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=02117c88ab0f886dff99a0ba78bf04a9 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3kT 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3kT 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.3kT 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5ac387c3f8999113e6cffa028f2959820c7c0478b2eb07e4f7354bf9b4f09018 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pTo 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 5ac387c3f8999113e6cffa028f2959820c7c0478b2eb07e4f7354bf9b4f09018 3 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5ac387c3f8999113e6cffa028f2959820c7c0478b2eb07e4f7354bf9b4f09018 3 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5ac387c3f8999113e6cffa028f2959820c7c0478b2eb07e4f7354bf9b4f09018 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pTo 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pTo 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.pTo 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2470320 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2470320 ']' 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.432 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.690 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.690 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:21.690 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2470473 /var/tmp/host.sock 00:18:21.690 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2470473 ']' 00:18:21.690 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:21.690 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.690 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:21.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
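Each gen_dhchap_key call above follows the same recipe: read the requested number of random hex characters from /dev/urandom with xxd, park the result in a mode-0600 temp file, and wrap it in the DHHC-1 secret form, where the second field is the digest id from the digests map (null=0, sha256=1, sha384=2, sha512=3) and the third field is produced by the inline python helper. Judging from the secrets that show up in the nvme connect calls further down, that helper base64-encodes the hex string with a short trailing checksum; the sketch below assumes the usual DHHC-1 convention of a little-endian CRC-32, so treat the encoding line as an approximation of the test's own helper rather than its exact code:

  # Approximate re-implementation of gen_dhchap_key sha256 32 from the trace.
  digest_id=1                                  # null=0 sha256=1 sha384=2 sha512=3
  len=32                                       # key length in hex characters
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
  file=$(mktemp -t spdk.key-sha256.XXX)

  # DHHC-1:<digest id>:<base64(key material + checksum)>:  -- checksum assumed
  # to be a little-endian CRC-32 of the key material
  enc=$(python3 -c "import sys, base64, zlib; k = sys.argv[1].encode(); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, 'little')).decode())" "$key")
  printf 'DHHC-1:%02x:%s:\n' "$digest_id" "$enc" > "$file"
  chmod 0600 "$file"
  echo "$file"                                 # this path is what lands in keys[i]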
00:18:21.690 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.690 18:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.260 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.260 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:22.260 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:22.260 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.260 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.260 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.260 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:22.260 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xcL 00:18:22.260 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.260 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.519 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.519 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.xcL 00:18:22.519 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.xcL 00:18:22.809 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.SAU ]] 00:18:22.809 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SAU 00:18:22.809 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.809 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.809 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.809 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SAU 00:18:22.809 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SAU 00:18:23.091 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:23.091 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Vpu 00:18:23.091 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.091 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.091 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.091 18:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Vpu 00:18:23.091 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Vpu 00:18:23.348 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.bK4 ]] 00:18:23.348 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bK4 00:18:23.348 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.348 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.348 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.348 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bK4 00:18:23.348 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bK4 00:18:23.606 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:23.606 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.XhG 00:18:23.606 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.606 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.606 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.606 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.XhG 00:18:23.606 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.XhG 00:18:23.864 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.3kT ]] 00:18:23.864 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3kT 00:18:23.864 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.864 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.864 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.864 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3kT 00:18:23.864 18:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3kT 00:18:24.122 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:24.122 18:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.pTo 00:18:24.122 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.122 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.122 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.122 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.pTo 00:18:24.122 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.pTo 00:18:24.380 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:24.380 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:24.380 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.380 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.380 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:24.380 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:24.639 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:24.639 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.639 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:24.639 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:24.639 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:24.639 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.640 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.640 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.640 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.640 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.640 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.640 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.640 
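From here on, every digest/dhgroup/key combination runs through the same sequence, split across the two SPDK processes: rpc_cmd talks to the nvmf_tgt started earlier (its default /var/tmp/spdk.sock), while hostrpc goes through rpc.py -s /var/tmp/host.sock to the host-side spdk_tgt. The key files are registered with both keyrings, the host is pinned to one digest and DH group, the host NQN is added to the subsystem with the key pair under test, a controller attach forces the DH-HMAC-CHAP handshake, and nvmf_subsystem_get_qpairs is used to confirm the qpair reports auth state "completed" with the expected digest and dhgroup. Condensed, using the paths from this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST=(-s /var/tmp/host.sock)               # host-side spdk_tgt RPC socket
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # register the secret files with both keyrings (target uses its default socket)
  $RPC keyring_file_add_key key0  /tmp/spdk.key-null.xcL
  $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SAU
  $RPC "${HOST[@]}" keyring_file_add_key key0  /tmp/spdk.key-null.xcL
  $RPC "${HOST[@]}" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SAU

  # host: restrict the initiator to a single digest/dhgroup combination
  $RPC "${HOST[@]}" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

  # target: require DH-HMAC-CHAP with key0/ckey0 for this host NQN
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # host: attaching a controller triggers the authentication handshake
  $RPC "${HOST[@]}" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # target: the qpair should now report auth state "completed" for sha256/null
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'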
18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.898 00:18:24.898 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.898 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.898 18:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.156 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.156 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.156 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.156 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.156 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.156 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.156 { 00:18:25.156 "cntlid": 1, 00:18:25.156 "qid": 0, 00:18:25.156 "state": "enabled", 00:18:25.156 "thread": "nvmf_tgt_poll_group_000", 00:18:25.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:25.156 "listen_address": { 00:18:25.156 "trtype": "TCP", 00:18:25.156 "adrfam": "IPv4", 00:18:25.156 "traddr": "10.0.0.2", 00:18:25.156 "trsvcid": "4420" 00:18:25.156 }, 00:18:25.156 "peer_address": { 00:18:25.156 "trtype": "TCP", 00:18:25.156 "adrfam": "IPv4", 00:18:25.156 "traddr": "10.0.0.1", 00:18:25.156 "trsvcid": "56656" 00:18:25.156 }, 00:18:25.156 "auth": { 00:18:25.156 "state": "completed", 00:18:25.156 "digest": "sha256", 00:18:25.156 "dhgroup": "null" 00:18:25.156 } 00:18:25.156 } 00:18:25.156 ]' 00:18:25.156 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.414 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.414 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.414 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:25.414 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.414 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.414 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.414 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.674 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:18:25.675 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:18:26.613 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.613 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.613 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.613 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.613 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.613 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.613 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.613 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.871 18:20:38 
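The same key pair is also exercised through the kernel initiator: nvme_connect above passes the generated DHHC-1 secrets straight to nvme-cli, which performs the DH-HMAC-CHAP handshake against the listener at 10.0.0.2, and the controller is removed again with nvme disconnect before the host entry is dropped from the subsystem. Stripped down (secrets abbreviated here; the full strings are the ones printed in the trace):

  # Kernel-initiator side of the same check, mirroring the connect in the trace.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:00:ZDAyMGJi...:' \
      --dhchap-ctrl-secret 'DHHC-1:03:ZTFlZjcz...:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # clean up the subsystem entry so the next digest/dhgroup/key combination
  # starts from scratch
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55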
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.871 18:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.130 00:18:27.390 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.390 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.391 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.650 { 00:18:27.650 "cntlid": 3, 00:18:27.650 "qid": 0, 00:18:27.650 "state": "enabled", 00:18:27.650 "thread": "nvmf_tgt_poll_group_000", 00:18:27.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:27.650 "listen_address": { 00:18:27.650 "trtype": "TCP", 00:18:27.650 "adrfam": "IPv4", 00:18:27.650 "traddr": "10.0.0.2", 00:18:27.650 "trsvcid": "4420" 00:18:27.650 }, 00:18:27.650 "peer_address": { 00:18:27.650 "trtype": "TCP", 00:18:27.650 "adrfam": "IPv4", 00:18:27.650 "traddr": "10.0.0.1", 00:18:27.650 "trsvcid": "56672" 00:18:27.650 }, 00:18:27.650 "auth": { 00:18:27.650 "state": "completed", 00:18:27.650 "digest": "sha256", 00:18:27.650 "dhgroup": "null" 00:18:27.650 } 00:18:27.650 } 00:18:27.650 ]' 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.650 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.909 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:18:27.909 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:18:29.289 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.289 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.289 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.289 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.289 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.289 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.289 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:29.289 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.289 18:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.289 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.549 00:18:29.811 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.811 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.811 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.070 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.070 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.070 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.070 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.070 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.070 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.070 { 00:18:30.070 "cntlid": 5, 00:18:30.070 "qid": 0, 00:18:30.070 "state": "enabled", 00:18:30.070 "thread": "nvmf_tgt_poll_group_000", 00:18:30.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:30.070 "listen_address": { 00:18:30.070 "trtype": "TCP", 00:18:30.070 "adrfam": "IPv4", 00:18:30.070 "traddr": "10.0.0.2", 00:18:30.070 "trsvcid": "4420" 00:18:30.070 }, 00:18:30.070 "peer_address": { 00:18:30.070 "trtype": "TCP", 00:18:30.070 "adrfam": "IPv4", 00:18:30.070 "traddr": "10.0.0.1", 00:18:30.070 "trsvcid": "56688" 00:18:30.070 }, 00:18:30.070 "auth": { 00:18:30.070 "state": "completed", 00:18:30.070 "digest": "sha256", 00:18:30.070 "dhgroup": "null" 00:18:30.070 } 00:18:30.070 } 00:18:30.070 ]' 00:18:30.070 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.070 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.070 18:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.070 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:30.070 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.070 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.070 18:20:42 
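A minimal sketch of the per-key cycle this trace keeps repeating, assuming the target listens on 10.0.0.2:4420, uses the default RPC socket, the host-side bdev_nvme RPC server sits at /var/tmp/host.sock, and keyN/ckeyN name key objects already loaded on both sides (all values copied from the log above):

  # register the host NQN on the subsystem with a DH-HMAC-CHAP key pair
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # attach from the host side with the same keys
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # verify the controller exists and the qpair finished authentication
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'              # expect nvme0
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'  # expect completed
  # tear down before the next key
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0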
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.070 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.328 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:18:30.328 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:18:31.266 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.266 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.266 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.266 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.266 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.266 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.266 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:31.266 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.523 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.088 00:18:32.088 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.088 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.088 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.345 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.345 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.345 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.345 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.345 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.345 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.345 { 00:18:32.345 "cntlid": 7, 00:18:32.345 "qid": 0, 00:18:32.345 "state": "enabled", 00:18:32.345 "thread": "nvmf_tgt_poll_group_000", 00:18:32.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:32.345 "listen_address": { 00:18:32.345 "trtype": "TCP", 00:18:32.345 "adrfam": "IPv4", 00:18:32.345 "traddr": "10.0.0.2", 00:18:32.345 "trsvcid": "4420" 00:18:32.345 }, 00:18:32.345 "peer_address": { 00:18:32.345 "trtype": "TCP", 00:18:32.345 "adrfam": "IPv4", 00:18:32.345 "traddr": "10.0.0.1", 00:18:32.345 "trsvcid": "56718" 00:18:32.345 }, 00:18:32.345 "auth": { 00:18:32.345 "state": "completed", 00:18:32.345 "digest": "sha256", 00:18:32.345 "dhgroup": "null" 00:18:32.345 } 00:18:32.345 } 00:18:32.345 ]' 00:18:32.345 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.345 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.346 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.346 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:32.346 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.346 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.346 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.346 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.603 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:18:32.603 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:18:33.538 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.538 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.538 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.538 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.796 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.796 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.796 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.796 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:33.796 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.055 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.314 00:18:34.314 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.314 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.314 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.572 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.572 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.572 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.572 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.572 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.572 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.572 { 00:18:34.572 "cntlid": 9, 00:18:34.572 "qid": 0, 00:18:34.572 "state": "enabled", 00:18:34.572 "thread": "nvmf_tgt_poll_group_000", 00:18:34.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:34.572 "listen_address": { 00:18:34.572 "trtype": "TCP", 00:18:34.572 "adrfam": "IPv4", 00:18:34.572 "traddr": "10.0.0.2", 00:18:34.572 "trsvcid": "4420" 00:18:34.572 }, 00:18:34.572 "peer_address": { 00:18:34.572 "trtype": "TCP", 00:18:34.572 "adrfam": "IPv4", 00:18:34.572 "traddr": "10.0.0.1", 00:18:34.572 "trsvcid": "36968" 00:18:34.572 }, 00:18:34.572 "auth": { 00:18:34.572 "state": "completed", 00:18:34.572 "digest": "sha256", 00:18:34.572 "dhgroup": "ffdhe2048" 00:18:34.572 } 00:18:34.572 } 00:18:34.572 ]' 00:18:34.572 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.572 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.572 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.831 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:18:34.831 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.831 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.831 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.831 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.090 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:18:35.090 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:18:36.030 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.030 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:36.030 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.030 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.030 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.030 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.030 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:36.030 18:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:36.289 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:36.290 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.290 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:36.290 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:36.290 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:36.290 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.290 18:20:48 
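The same key is also exercised through the kernel initiator; a sketch of that path, with the DHHC-1 secrets elided (they are the ones printed in the trace above) and the -i/-l values as used there:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0     # expect "disconnected 1 controller(s)"
  # drop the host entry again so the next key/group combination starts clean
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55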
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.290 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.290 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.290 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.290 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.290 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.290 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.857 00:18:36.857 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.857 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.857 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.857 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.857 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.857 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.857 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.857 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.857 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.857 { 00:18:36.857 "cntlid": 11, 00:18:36.857 "qid": 0, 00:18:36.857 "state": "enabled", 00:18:36.857 "thread": "nvmf_tgt_poll_group_000", 00:18:36.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:36.857 "listen_address": { 00:18:36.857 "trtype": "TCP", 00:18:36.857 "adrfam": "IPv4", 00:18:36.857 "traddr": "10.0.0.2", 00:18:36.857 "trsvcid": "4420" 00:18:36.857 }, 00:18:36.857 "peer_address": { 00:18:36.857 "trtype": "TCP", 00:18:36.857 "adrfam": "IPv4", 00:18:36.857 "traddr": "10.0.0.1", 00:18:36.857 "trsvcid": "37006" 00:18:36.857 }, 00:18:36.857 "auth": { 00:18:36.857 "state": "completed", 00:18:36.857 "digest": "sha256", 00:18:36.857 "dhgroup": "ffdhe2048" 00:18:36.857 } 00:18:36.857 } 00:18:36.857 ]' 00:18:36.857 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.115 18:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.115 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.115 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:37.115 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.115 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.115 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.115 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.374 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:18:37.374 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:18:38.311 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.311 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.311 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.311 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.311 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.311 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.311 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.311 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:38.569 18:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.569 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.136 00:18:39.136 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.136 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.136 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.136 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.136 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.136 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.136 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.136 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.136 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.136 { 00:18:39.136 "cntlid": 13, 00:18:39.136 "qid": 0, 00:18:39.136 "state": "enabled", 00:18:39.136 "thread": "nvmf_tgt_poll_group_000", 00:18:39.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:39.136 "listen_address": { 00:18:39.136 "trtype": "TCP", 00:18:39.136 "adrfam": "IPv4", 00:18:39.136 "traddr": "10.0.0.2", 00:18:39.136 "trsvcid": "4420" 00:18:39.136 }, 00:18:39.136 "peer_address": { 00:18:39.136 "trtype": "TCP", 00:18:39.136 "adrfam": "IPv4", 00:18:39.136 "traddr": "10.0.0.1", 00:18:39.136 "trsvcid": "37034" 00:18:39.136 }, 00:18:39.136 "auth": { 00:18:39.136 "state": "completed", 00:18:39.136 "digest": 
"sha256", 00:18:39.136 "dhgroup": "ffdhe2048" 00:18:39.136 } 00:18:39.136 } 00:18:39.136 ]' 00:18:39.136 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.394 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.394 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.394 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:39.394 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.394 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.394 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.394 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.652 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:18:39.652 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:18:40.589 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.589 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.589 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.589 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.589 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.589 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.589 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:40.589 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.847 18:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.847 18:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.419 00:18:41.419 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.419 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.419 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.678 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.678 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.678 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.678 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.678 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.679 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.679 { 00:18:41.679 "cntlid": 15, 00:18:41.679 "qid": 0, 00:18:41.679 "state": "enabled", 00:18:41.679 "thread": "nvmf_tgt_poll_group_000", 00:18:41.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:41.679 "listen_address": { 00:18:41.679 "trtype": "TCP", 00:18:41.679 "adrfam": "IPv4", 00:18:41.679 "traddr": "10.0.0.2", 00:18:41.679 "trsvcid": "4420" 00:18:41.679 }, 00:18:41.679 "peer_address": { 00:18:41.679 "trtype": "TCP", 00:18:41.679 "adrfam": "IPv4", 00:18:41.679 "traddr": "10.0.0.1", 00:18:41.679 
"trsvcid": "37050" 00:18:41.679 }, 00:18:41.679 "auth": { 00:18:41.679 "state": "completed", 00:18:41.679 "digest": "sha256", 00:18:41.679 "dhgroup": "ffdhe2048" 00:18:41.679 } 00:18:41.679 } 00:18:41.679 ]' 00:18:41.679 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.679 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.679 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.679 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:41.679 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.679 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.679 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.679 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.936 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:18:41.936 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:18:42.876 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.876 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:42.876 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.876 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.876 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.876 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.876 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.876 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:42.876 18:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:43.133 18:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.133 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.698 00:18:43.698 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.698 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.698 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.957 { 00:18:43.957 "cntlid": 17, 00:18:43.957 "qid": 0, 00:18:43.957 "state": "enabled", 00:18:43.957 "thread": "nvmf_tgt_poll_group_000", 00:18:43.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:43.957 "listen_address": { 00:18:43.957 "trtype": "TCP", 00:18:43.957 "adrfam": "IPv4", 
00:18:43.957 "traddr": "10.0.0.2", 00:18:43.957 "trsvcid": "4420" 00:18:43.957 }, 00:18:43.957 "peer_address": { 00:18:43.957 "trtype": "TCP", 00:18:43.957 "adrfam": "IPv4", 00:18:43.957 "traddr": "10.0.0.1", 00:18:43.957 "trsvcid": "53890" 00:18:43.957 }, 00:18:43.957 "auth": { 00:18:43.957 "state": "completed", 00:18:43.957 "digest": "sha256", 00:18:43.957 "dhgroup": "ffdhe3072" 00:18:43.957 } 00:18:43.957 } 00:18:43.957 ]' 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.957 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.216 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:18:44.216 18:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:18:45.156 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.156 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.156 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.156 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.156 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.156 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.156 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:45.156 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:45.416 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:45.416 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.416 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:45.416 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:45.416 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:45.416 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.416 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.416 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.417 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.417 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.417 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.417 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.417 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.985 00:18:45.985 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.985 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.985 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.243 { 
00:18:46.243 "cntlid": 19, 00:18:46.243 "qid": 0, 00:18:46.243 "state": "enabled", 00:18:46.243 "thread": "nvmf_tgt_poll_group_000", 00:18:46.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:46.243 "listen_address": { 00:18:46.243 "trtype": "TCP", 00:18:46.243 "adrfam": "IPv4", 00:18:46.243 "traddr": "10.0.0.2", 00:18:46.243 "trsvcid": "4420" 00:18:46.243 }, 00:18:46.243 "peer_address": { 00:18:46.243 "trtype": "TCP", 00:18:46.243 "adrfam": "IPv4", 00:18:46.243 "traddr": "10.0.0.1", 00:18:46.243 "trsvcid": "53910" 00:18:46.243 }, 00:18:46.243 "auth": { 00:18:46.243 "state": "completed", 00:18:46.243 "digest": "sha256", 00:18:46.243 "dhgroup": "ffdhe3072" 00:18:46.243 } 00:18:46.243 } 00:18:46.243 ]' 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.243 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.501 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:18:46.501 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:18:47.497 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.497 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.497 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.497 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.498 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.498 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.498 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.498 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.756 18:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.323 00:18:48.323 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.323 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.323 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.581 18:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.581 { 00:18:48.581 "cntlid": 21, 00:18:48.581 "qid": 0, 00:18:48.581 "state": "enabled", 00:18:48.581 "thread": "nvmf_tgt_poll_group_000", 00:18:48.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:48.581 "listen_address": { 00:18:48.581 "trtype": "TCP", 00:18:48.581 "adrfam": "IPv4", 00:18:48.581 "traddr": "10.0.0.2", 00:18:48.581 "trsvcid": "4420" 00:18:48.581 }, 00:18:48.581 "peer_address": { 00:18:48.581 "trtype": "TCP", 00:18:48.581 "adrfam": "IPv4", 00:18:48.581 "traddr": "10.0.0.1", 00:18:48.581 "trsvcid": "53938" 00:18:48.581 }, 00:18:48.581 "auth": { 00:18:48.581 "state": "completed", 00:18:48.581 "digest": "sha256", 00:18:48.581 "dhgroup": "ffdhe3072" 00:18:48.581 } 00:18:48.581 } 00:18:48.581 ]' 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.581 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.841 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:18:48.841 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:18:49.776 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.776 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.776 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.776 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.776 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:49.776 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.776 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:49.776 18:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.345 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.604 00:18:50.604 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.604 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.604 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.863 18:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.863 { 00:18:50.863 "cntlid": 23, 00:18:50.863 "qid": 0, 00:18:50.863 "state": "enabled", 00:18:50.863 "thread": "nvmf_tgt_poll_group_000", 00:18:50.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:50.863 "listen_address": { 00:18:50.863 "trtype": "TCP", 00:18:50.863 "adrfam": "IPv4", 00:18:50.863 "traddr": "10.0.0.2", 00:18:50.863 "trsvcid": "4420" 00:18:50.863 }, 00:18:50.863 "peer_address": { 00:18:50.863 "trtype": "TCP", 00:18:50.863 "adrfam": "IPv4", 00:18:50.863 "traddr": "10.0.0.1", 00:18:50.863 "trsvcid": "53954" 00:18:50.863 }, 00:18:50.863 "auth": { 00:18:50.863 "state": "completed", 00:18:50.863 "digest": "sha256", 00:18:50.863 "dhgroup": "ffdhe3072" 00:18:50.863 } 00:18:50.863 } 00:18:50.863 ]' 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.863 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.430 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:18:51.430 18:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:18:52.368 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.368 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.368 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.368 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.368 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:52.368 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.368 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.368 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:52.368 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:52.626 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:52.626 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.626 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:52.626 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:52.626 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:52.626 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.627 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.627 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.627 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.627 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.627 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.627 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.627 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.885 00:18:52.885 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.885 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.885 18:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.143 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.143 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.143 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.143 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.143 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.143 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.143 { 00:18:53.143 "cntlid": 25, 00:18:53.143 "qid": 0, 00:18:53.143 "state": "enabled", 00:18:53.143 "thread": "nvmf_tgt_poll_group_000", 00:18:53.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:53.143 "listen_address": { 00:18:53.143 "trtype": "TCP", 00:18:53.143 "adrfam": "IPv4", 00:18:53.143 "traddr": "10.0.0.2", 00:18:53.143 "trsvcid": "4420" 00:18:53.143 }, 00:18:53.143 "peer_address": { 00:18:53.143 "trtype": "TCP", 00:18:53.143 "adrfam": "IPv4", 00:18:53.143 "traddr": "10.0.0.1", 00:18:53.143 "trsvcid": "47960" 00:18:53.143 }, 00:18:53.143 "auth": { 00:18:53.143 "state": "completed", 00:18:53.143 "digest": "sha256", 00:18:53.143 "dhgroup": "ffdhe4096" 00:18:53.143 } 00:18:53.143 } 00:18:53.143 ]' 00:18:53.143 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.143 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.143 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.400 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:53.400 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.400 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.400 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.400 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.659 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:18:53.659 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:18:54.598 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.598 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:54.598 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.598 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.598 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.598 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.598 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.598 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.857 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.424 00:18:55.424 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.424 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.424 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.683 { 00:18:55.683 "cntlid": 27, 00:18:55.683 "qid": 0, 00:18:55.683 "state": "enabled", 00:18:55.683 "thread": "nvmf_tgt_poll_group_000", 00:18:55.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:55.683 "listen_address": { 00:18:55.683 "trtype": "TCP", 00:18:55.683 "adrfam": "IPv4", 00:18:55.683 "traddr": "10.0.0.2", 00:18:55.683 "trsvcid": "4420" 00:18:55.683 }, 00:18:55.683 "peer_address": { 00:18:55.683 "trtype": "TCP", 00:18:55.683 "adrfam": "IPv4", 00:18:55.683 "traddr": "10.0.0.1", 00:18:55.683 "trsvcid": "47990" 00:18:55.683 }, 00:18:55.683 "auth": { 00:18:55.683 "state": "completed", 00:18:55.683 "digest": "sha256", 00:18:55.683 "dhgroup": "ffdhe4096" 00:18:55.683 } 00:18:55.683 } 00:18:55.683 ]' 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.683 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.941 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:18:55.941 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:18:57.313 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:57.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.313 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.880 00:18:57.880 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
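The trace repeats one fixed verification pattern per digest/dhgroup/key combination: restrict the host's DH-HMAC-CHAP options, allow the host NQN on the target with the key under test, attach a controller, check the negotiated auth parameters on the qpair, then tear everything down. A condensed sketch of that loop follows; it is an illustrative reconstruction only, assuming the RPC socket path, addresses, and NQNs shown in the trace and that keys key0..key3 (and their ckeyN counterparts) were registered earlier in the test.

    #!/usr/bin/env bash
    # Condensed sketch of the auth verification loop seen in the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
      for key in key0 key1 key2 key3; do
        # Host side: limit negotiation to one digest/dhgroup pair.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # Target side: allow the host NQN with the key under test.
        "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"
        # Connect a controller that authenticates with the same key.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "$key"
        # Verify the qpair reports the expected digest, dhgroup and completed state.
        "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
            | jq -r '.[0].auth | .digest, .dhgroup, .state'
        # Tear down before the next combination.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
      done
    done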
00:18:57.880 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.880 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.880 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.880 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.880 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.880 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.138 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.138 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.138 { 00:18:58.138 "cntlid": 29, 00:18:58.138 "qid": 0, 00:18:58.138 "state": "enabled", 00:18:58.138 "thread": "nvmf_tgt_poll_group_000", 00:18:58.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:18:58.138 "listen_address": { 00:18:58.138 "trtype": "TCP", 00:18:58.138 "adrfam": "IPv4", 00:18:58.138 "traddr": "10.0.0.2", 00:18:58.138 "trsvcid": "4420" 00:18:58.138 }, 00:18:58.138 "peer_address": { 00:18:58.138 "trtype": "TCP", 00:18:58.138 "adrfam": "IPv4", 00:18:58.138 "traddr": "10.0.0.1", 00:18:58.138 "trsvcid": "48022" 00:18:58.138 }, 00:18:58.138 "auth": { 00:18:58.138 "state": "completed", 00:18:58.138 "digest": "sha256", 00:18:58.138 "dhgroup": "ffdhe4096" 00:18:58.138 } 00:18:58.138 } 00:18:58.138 ]' 00:18:58.138 18:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.138 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.138 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.138 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:58.138 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.138 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.138 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.138 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.397 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:18:58.397 18:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: 
--dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:18:59.336 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.336 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.336 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.336 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.336 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.336 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.336 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:59.336 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:59.594 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.158 00:19:00.159 18:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.159 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.159 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.417 { 00:19:00.417 "cntlid": 31, 00:19:00.417 "qid": 0, 00:19:00.417 "state": "enabled", 00:19:00.417 "thread": "nvmf_tgt_poll_group_000", 00:19:00.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:00.417 "listen_address": { 00:19:00.417 "trtype": "TCP", 00:19:00.417 "adrfam": "IPv4", 00:19:00.417 "traddr": "10.0.0.2", 00:19:00.417 "trsvcid": "4420" 00:19:00.417 }, 00:19:00.417 "peer_address": { 00:19:00.417 "trtype": "TCP", 00:19:00.417 "adrfam": "IPv4", 00:19:00.417 "traddr": "10.0.0.1", 00:19:00.417 "trsvcid": "48068" 00:19:00.417 }, 00:19:00.417 "auth": { 00:19:00.417 "state": "completed", 00:19:00.417 "digest": "sha256", 00:19:00.417 "dhgroup": "ffdhe4096" 00:19:00.417 } 00:19:00.417 } 00:19:00.417 ]' 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.417 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.677 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:00.677 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:01.617 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.617 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.617 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.617 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.617 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.617 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.617 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.617 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:01.617 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.875 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.812 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.812 { 00:19:02.812 "cntlid": 33, 00:19:02.812 "qid": 0, 00:19:02.812 "state": "enabled", 00:19:02.812 "thread": "nvmf_tgt_poll_group_000", 00:19:02.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:02.812 "listen_address": { 00:19:02.812 "trtype": "TCP", 00:19:02.812 "adrfam": "IPv4", 00:19:02.812 "traddr": "10.0.0.2", 00:19:02.812 "trsvcid": "4420" 00:19:02.812 }, 00:19:02.812 "peer_address": { 00:19:02.812 "trtype": "TCP", 00:19:02.812 "adrfam": "IPv4", 00:19:02.812 "traddr": "10.0.0.1", 00:19:02.812 "trsvcid": "48102" 00:19:02.812 }, 00:19:02.812 "auth": { 00:19:02.812 "state": "completed", 00:19:02.812 "digest": "sha256", 00:19:02.812 "dhgroup": "ffdhe6144" 00:19:02.812 } 00:19:02.812 } 00:19:02.812 ]' 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.812 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.085 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:03.085 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.085 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.085 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.085 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.344 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:03.344 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:04.279 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.279 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.279 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.279 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.279 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.279 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.279 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:04.279 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.538 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.104 00:19:05.104 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.104 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.104 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.362 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.362 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.362 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.362 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.362 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.362 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.362 { 00:19:05.362 "cntlid": 35, 00:19:05.362 "qid": 0, 00:19:05.362 "state": "enabled", 00:19:05.362 "thread": "nvmf_tgt_poll_group_000", 00:19:05.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:05.362 "listen_address": { 00:19:05.362 "trtype": "TCP", 00:19:05.362 "adrfam": "IPv4", 00:19:05.362 "traddr": "10.0.0.2", 00:19:05.362 "trsvcid": "4420" 00:19:05.362 }, 00:19:05.362 "peer_address": { 00:19:05.362 "trtype": "TCP", 00:19:05.362 "adrfam": "IPv4", 00:19:05.362 "traddr": "10.0.0.1", 00:19:05.362 "trsvcid": "50828" 00:19:05.362 }, 00:19:05.362 "auth": { 00:19:05.362 "state": "completed", 00:19:05.362 "digest": "sha256", 00:19:05.362 "dhgroup": "ffdhe6144" 00:19:05.362 } 00:19:05.362 } 00:19:05.362 ]' 00:19:05.362 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.362 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.362 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.362 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:05.621 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.621 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.621 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.621 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.879 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:05.879 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:06.815 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.815 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.815 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.815 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.815 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.815 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.815 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:06.815 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.074 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.641 00:19:07.641 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.641 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.641 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.900 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.900 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.900 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.900 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.900 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.900 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.900 { 00:19:07.900 "cntlid": 37, 00:19:07.900 "qid": 0, 00:19:07.900 "state": "enabled", 00:19:07.900 "thread": "nvmf_tgt_poll_group_000", 00:19:07.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:07.900 "listen_address": { 00:19:07.900 "trtype": "TCP", 00:19:07.900 "adrfam": "IPv4", 00:19:07.900 "traddr": "10.0.0.2", 00:19:07.900 "trsvcid": "4420" 00:19:07.900 }, 00:19:07.900 "peer_address": { 00:19:07.900 "trtype": "TCP", 00:19:07.900 "adrfam": "IPv4", 00:19:07.900 "traddr": "10.0.0.1", 00:19:07.900 "trsvcid": "50870" 00:19:07.900 }, 00:19:07.900 "auth": { 00:19:07.900 "state": "completed", 00:19:07.900 "digest": "sha256", 00:19:07.900 "dhgroup": "ffdhe6144" 00:19:07.900 } 00:19:07.900 } 00:19:07.900 ]' 00:19:07.900 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.158 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.158 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.158 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.158 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.158 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.158 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:08.158 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.418 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:08.418 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:09.357 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.357 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.357 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.357 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.357 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.357 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.357 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.357 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:09.924 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:09.924 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.924 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:09.924 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:09.924 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:09.925 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.925 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:09.925 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.925 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.925 18:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.925 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:09.925 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.925 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.494 00:19:10.494 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.494 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.494 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.494 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.494 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.753 { 00:19:10.753 "cntlid": 39, 00:19:10.753 "qid": 0, 00:19:10.753 "state": "enabled", 00:19:10.753 "thread": "nvmf_tgt_poll_group_000", 00:19:10.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:10.753 "listen_address": { 00:19:10.753 "trtype": "TCP", 00:19:10.753 "adrfam": "IPv4", 00:19:10.753 "traddr": "10.0.0.2", 00:19:10.753 "trsvcid": "4420" 00:19:10.753 }, 00:19:10.753 "peer_address": { 00:19:10.753 "trtype": "TCP", 00:19:10.753 "adrfam": "IPv4", 00:19:10.753 "traddr": "10.0.0.1", 00:19:10.753 "trsvcid": "50912" 00:19:10.753 }, 00:19:10.753 "auth": { 00:19:10.753 "state": "completed", 00:19:10.753 "digest": "sha256", 00:19:10.753 "dhgroup": "ffdhe6144" 00:19:10.753 } 00:19:10.753 } 00:19:10.753 ]' 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.753 18:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.012 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:11.012 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:11.948 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.948 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.948 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.948 18:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.948 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.948 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.948 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.948 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:11.948 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:12.206 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:12.206 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.206 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.206 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:12.206 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:12.206 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.207 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.207 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:12.207 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.207 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.207 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.207 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.207 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.187 00:19:13.187 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.187 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.187 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.452 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.452 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.452 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.452 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.452 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.452 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.452 { 00:19:13.452 "cntlid": 41, 00:19:13.452 "qid": 0, 00:19:13.452 "state": "enabled", 00:19:13.452 "thread": "nvmf_tgt_poll_group_000", 00:19:13.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:13.452 "listen_address": { 00:19:13.452 "trtype": "TCP", 00:19:13.452 "adrfam": "IPv4", 00:19:13.452 "traddr": "10.0.0.2", 00:19:13.452 "trsvcid": "4420" 00:19:13.452 }, 00:19:13.452 "peer_address": { 00:19:13.452 "trtype": "TCP", 00:19:13.452 "adrfam": "IPv4", 00:19:13.452 "traddr": "10.0.0.1", 00:19:13.452 "trsvcid": "50930" 00:19:13.452 }, 00:19:13.452 "auth": { 00:19:13.452 "state": "completed", 00:19:13.452 "digest": "sha256", 00:19:13.452 "dhgroup": "ffdhe8192" 00:19:13.452 } 00:19:13.452 } 00:19:13.452 ]' 00:19:13.452 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.452 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.452 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.453 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:13.453 18:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.711 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.711 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.711 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.969 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:13.969 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:14.904 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.904 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.904 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.904 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.904 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.904 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.904 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:14.904 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.162 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.101 00:19:16.101 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.101 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.101 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.360 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.360 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.360 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.360 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.360 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.360 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.360 { 00:19:16.360 "cntlid": 43, 00:19:16.360 "qid": 0, 00:19:16.360 "state": "enabled", 00:19:16.360 "thread": "nvmf_tgt_poll_group_000", 00:19:16.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:16.360 "listen_address": { 00:19:16.360 "trtype": "TCP", 00:19:16.360 "adrfam": "IPv4", 00:19:16.360 "traddr": "10.0.0.2", 00:19:16.360 "trsvcid": "4420" 00:19:16.360 }, 00:19:16.360 "peer_address": { 00:19:16.360 "trtype": "TCP", 00:19:16.360 "adrfam": "IPv4", 00:19:16.360 "traddr": "10.0.0.1", 00:19:16.361 "trsvcid": "32822" 00:19:16.361 }, 00:19:16.361 "auth": { 00:19:16.361 "state": "completed", 00:19:16.361 "digest": "sha256", 00:19:16.361 "dhgroup": "ffdhe8192" 00:19:16.361 } 00:19:16.361 } 00:19:16.361 ]' 00:19:16.361 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.361 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:16.361 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.361 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.361 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.361 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.361 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.361 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.930 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:16.930 18:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:17.866 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.866 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.866 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.866 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.866 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.866 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.866 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:17.866 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:18.125 18:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.125 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.062 00:19:19.062 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.062 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.062 18:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.321 { 00:19:19.321 "cntlid": 45, 00:19:19.321 "qid": 0, 00:19:19.321 "state": "enabled", 00:19:19.321 "thread": "nvmf_tgt_poll_group_000", 00:19:19.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:19.321 "listen_address": { 00:19:19.321 "trtype": "TCP", 00:19:19.321 "adrfam": "IPv4", 00:19:19.321 "traddr": "10.0.0.2", 00:19:19.321 "trsvcid": "4420" 00:19:19.321 }, 00:19:19.321 "peer_address": { 00:19:19.321 "trtype": "TCP", 00:19:19.321 "adrfam": "IPv4", 00:19:19.321 "traddr": "10.0.0.1", 00:19:19.321 "trsvcid": "32848" 00:19:19.321 }, 00:19:19.321 "auth": { 00:19:19.321 "state": "completed", 00:19:19.321 "digest": "sha256", 00:19:19.321 "dhgroup": "ffdhe8192" 00:19:19.321 } 00:19:19.321 } 00:19:19.321 ]' 00:19:19.321 
18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.321 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.322 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.580 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:19.580 18:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:20.515 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.515 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.515 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.515 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.515 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.515 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.515 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.515 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.773 18:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.773 18:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.707 00:19:21.707 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.707 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.707 18:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.966 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.966 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.966 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.966 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.966 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.966 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.966 { 00:19:21.966 "cntlid": 47, 00:19:21.966 "qid": 0, 00:19:21.966 "state": "enabled", 00:19:21.966 "thread": "nvmf_tgt_poll_group_000", 00:19:21.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:21.966 "listen_address": { 00:19:21.966 "trtype": "TCP", 00:19:21.966 "adrfam": "IPv4", 00:19:21.966 "traddr": "10.0.0.2", 00:19:21.966 "trsvcid": "4420" 00:19:21.966 }, 00:19:21.966 "peer_address": { 00:19:21.966 "trtype": "TCP", 00:19:21.966 "adrfam": "IPv4", 00:19:21.966 "traddr": "10.0.0.1", 00:19:21.966 "trsvcid": "32874" 00:19:21.966 }, 00:19:21.966 "auth": { 00:19:21.966 "state": "completed", 00:19:21.966 
"digest": "sha256", 00:19:21.966 "dhgroup": "ffdhe8192" 00:19:21.966 } 00:19:21.966 } 00:19:21.966 ]' 00:19:21.966 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.225 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.225 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.225 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.226 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.226 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.226 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.226 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.484 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:22.484 18:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:23.418 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.418 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.418 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.418 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.418 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.418 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:23.418 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.418 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.418 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:23.418 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:23.676 18:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.676 18:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.243 00:19:24.243 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.243 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.243 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.502 { 00:19:24.502 "cntlid": 49, 00:19:24.502 "qid": 0, 00:19:24.502 "state": "enabled", 00:19:24.502 "thread": "nvmf_tgt_poll_group_000", 00:19:24.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:24.502 "listen_address": { 00:19:24.502 "trtype": "TCP", 00:19:24.502 "adrfam": "IPv4", 
00:19:24.502 "traddr": "10.0.0.2", 00:19:24.502 "trsvcid": "4420" 00:19:24.502 }, 00:19:24.502 "peer_address": { 00:19:24.502 "trtype": "TCP", 00:19:24.502 "adrfam": "IPv4", 00:19:24.502 "traddr": "10.0.0.1", 00:19:24.502 "trsvcid": "37404" 00:19:24.502 }, 00:19:24.502 "auth": { 00:19:24.502 "state": "completed", 00:19:24.502 "digest": "sha384", 00:19:24.502 "dhgroup": "null" 00:19:24.502 } 00:19:24.502 } 00:19:24.502 ]' 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.502 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.761 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:24.761 18:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:25.700 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.700 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.700 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.700 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.700 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.700 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.700 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:25.700 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.959 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.526 00:19:26.526 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.526 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.526 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.785 { 00:19:26.785 "cntlid": 51, 00:19:26.785 "qid": 0, 00:19:26.785 "state": "enabled", 
00:19:26.785 "thread": "nvmf_tgt_poll_group_000", 00:19:26.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:26.785 "listen_address": { 00:19:26.785 "trtype": "TCP", 00:19:26.785 "adrfam": "IPv4", 00:19:26.785 "traddr": "10.0.0.2", 00:19:26.785 "trsvcid": "4420" 00:19:26.785 }, 00:19:26.785 "peer_address": { 00:19:26.785 "trtype": "TCP", 00:19:26.785 "adrfam": "IPv4", 00:19:26.785 "traddr": "10.0.0.1", 00:19:26.785 "trsvcid": "37436" 00:19:26.785 }, 00:19:26.785 "auth": { 00:19:26.785 "state": "completed", 00:19:26.785 "digest": "sha384", 00:19:26.785 "dhgroup": "null" 00:19:26.785 } 00:19:26.785 } 00:19:26.785 ]' 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.785 18:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.044 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:27.044 18:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.424 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.993 00:19:28.993 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.993 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.993 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.252 18:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.252 { 00:19:29.252 "cntlid": 53, 00:19:29.252 "qid": 0, 00:19:29.252 "state": "enabled", 00:19:29.252 "thread": "nvmf_tgt_poll_group_000", 00:19:29.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:29.252 "listen_address": { 00:19:29.252 "trtype": "TCP", 00:19:29.252 "adrfam": "IPv4", 00:19:29.252 "traddr": "10.0.0.2", 00:19:29.252 "trsvcid": "4420" 00:19:29.252 }, 00:19:29.252 "peer_address": { 00:19:29.252 "trtype": "TCP", 00:19:29.252 "adrfam": "IPv4", 00:19:29.252 "traddr": "10.0.0.1", 00:19:29.252 "trsvcid": "37458" 00:19:29.252 }, 00:19:29.252 "auth": { 00:19:29.252 "state": "completed", 00:19:29.252 "digest": "sha384", 00:19:29.252 "dhgroup": "null" 00:19:29.252 } 00:19:29.252 } 00:19:29.252 ]' 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.252 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.510 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:29.510 18:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:30.446 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.446 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.446 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.446 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.446 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.446 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:19:30.446 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.446 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.705 18:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:31.274 00:19:31.274 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.274 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.274 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.274 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.274 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.274 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.274 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.274 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.274 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.274 { 00:19:31.274 "cntlid": 55, 00:19:31.274 "qid": 0, 00:19:31.274 "state": "enabled", 00:19:31.274 "thread": "nvmf_tgt_poll_group_000", 00:19:31.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:31.274 "listen_address": { 00:19:31.274 "trtype": "TCP", 00:19:31.274 "adrfam": "IPv4", 00:19:31.274 "traddr": "10.0.0.2", 00:19:31.274 "trsvcid": "4420" 00:19:31.274 }, 00:19:31.274 "peer_address": { 00:19:31.274 "trtype": "TCP", 00:19:31.274 "adrfam": "IPv4", 00:19:31.274 "traddr": "10.0.0.1", 00:19:31.274 "trsvcid": "37484" 00:19:31.274 }, 00:19:31.274 "auth": { 00:19:31.274 "state": "completed", 00:19:31.274 "digest": "sha384", 00:19:31.274 "dhgroup": "null" 00:19:31.274 } 00:19:31.274 } 00:19:31.274 ]' 00:19:31.274 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.532 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.532 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.532 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:31.532 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.532 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.532 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.532 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.791 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:31.791 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:32.730 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.730 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.730 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.730 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.730 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.730 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.730 18:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.730 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:32.730 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.988 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.555 00:19:33.555 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.555 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.555 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.555 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.555 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.555 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:33.555 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.812 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.812 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.812 { 00:19:33.812 "cntlid": 57, 00:19:33.812 "qid": 0, 00:19:33.812 "state": "enabled", 00:19:33.813 "thread": "nvmf_tgt_poll_group_000", 00:19:33.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:33.813 "listen_address": { 00:19:33.813 "trtype": "TCP", 00:19:33.813 "adrfam": "IPv4", 00:19:33.813 "traddr": "10.0.0.2", 00:19:33.813 "trsvcid": "4420" 00:19:33.813 }, 00:19:33.813 "peer_address": { 00:19:33.813 "trtype": "TCP", 00:19:33.813 "adrfam": "IPv4", 00:19:33.813 "traddr": "10.0.0.1", 00:19:33.813 "trsvcid": "37784" 00:19:33.813 }, 00:19:33.813 "auth": { 00:19:33.813 "state": "completed", 00:19:33.813 "digest": "sha384", 00:19:33.813 "dhgroup": "ffdhe2048" 00:19:33.813 } 00:19:33.813 } 00:19:33.813 ]' 00:19:33.813 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.813 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.813 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.813 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:33.813 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.813 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.813 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.813 18:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.070 18:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:34.070 18:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:35.003 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.003 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.003 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.003 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.003 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.003 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.003 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:35.003 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:35.260 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:35.260 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.260 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:35.260 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:35.260 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:35.260 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.260 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.260 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.260 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.516 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.516 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.516 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.516 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.773 00:19:35.773 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.773 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.773 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.030 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.031 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.031 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.031 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.031 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.031 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.031 { 00:19:36.031 "cntlid": 59, 00:19:36.031 "qid": 0, 00:19:36.031 "state": "enabled", 00:19:36.031 "thread": "nvmf_tgt_poll_group_000", 00:19:36.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:36.031 "listen_address": { 00:19:36.031 "trtype": "TCP", 00:19:36.031 "adrfam": "IPv4", 00:19:36.031 "traddr": "10.0.0.2", 00:19:36.031 "trsvcid": "4420" 00:19:36.031 }, 00:19:36.031 "peer_address": { 00:19:36.031 "trtype": "TCP", 00:19:36.031 "adrfam": "IPv4", 00:19:36.031 "traddr": "10.0.0.1", 00:19:36.031 "trsvcid": "37822" 00:19:36.031 }, 00:19:36.031 "auth": { 00:19:36.031 "state": "completed", 00:19:36.031 "digest": "sha384", 00:19:36.031 "dhgroup": "ffdhe2048" 00:19:36.031 } 00:19:36.031 } 00:19:36.031 ]' 00:19:36.031 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.031 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.031 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.031 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:36.031 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.031 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.031 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.031 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.288 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:36.288 18:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.664 18:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.232 00:19:38.232 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.232 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.232 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.561 { 00:19:38.561 "cntlid": 61, 00:19:38.561 "qid": 0, 00:19:38.561 "state": "enabled", 00:19:38.561 "thread": "nvmf_tgt_poll_group_000", 00:19:38.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:38.561 "listen_address": { 00:19:38.561 "trtype": "TCP", 00:19:38.561 "adrfam": "IPv4", 00:19:38.561 "traddr": "10.0.0.2", 00:19:38.561 "trsvcid": "4420" 00:19:38.561 }, 00:19:38.561 "peer_address": { 00:19:38.561 "trtype": "TCP", 00:19:38.561 "adrfam": "IPv4", 00:19:38.561 "traddr": "10.0.0.1", 00:19:38.561 "trsvcid": "37852" 00:19:38.561 }, 00:19:38.561 "auth": { 00:19:38.561 "state": "completed", 00:19:38.561 "digest": "sha384", 00:19:38.561 "dhgroup": "ffdhe2048" 00:19:38.561 } 00:19:38.561 } 00:19:38.561 ]' 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.561 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.847 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:38.847 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:39.781 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.781 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.781 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.781 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.781 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.781 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.781 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.781 18:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.040 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:40.299 00:19:40.299 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.299 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.299 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.557 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.557 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.557 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.557 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.557 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.557 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.557 { 00:19:40.557 "cntlid": 63, 00:19:40.557 "qid": 0, 00:19:40.557 "state": "enabled", 00:19:40.557 "thread": "nvmf_tgt_poll_group_000", 00:19:40.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:40.557 "listen_address": { 00:19:40.557 "trtype": "TCP", 00:19:40.557 "adrfam": "IPv4", 00:19:40.557 "traddr": "10.0.0.2", 00:19:40.557 "trsvcid": "4420" 00:19:40.557 }, 00:19:40.557 "peer_address": { 00:19:40.557 "trtype": "TCP", 00:19:40.557 "adrfam": "IPv4", 00:19:40.557 "traddr": "10.0.0.1", 00:19:40.557 "trsvcid": "37882" 00:19:40.557 }, 00:19:40.557 "auth": { 00:19:40.557 "state": "completed", 00:19:40.557 "digest": "sha384", 00:19:40.557 "dhgroup": "ffdhe2048" 00:19:40.557 } 00:19:40.557 } 00:19:40.557 ]' 00:19:40.557 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.815 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.815 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.815 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.815 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.815 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.815 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.815 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.073 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:41.073 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:42.007 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:42.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.007 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.007 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.007 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.007 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.007 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.007 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.007 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:42.007 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.266 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.832 
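Each connect_authenticate pass in this stretch of the log repeats the same host/target handshake. Condensed into plain rpc.py invocations, it looks roughly like the sketch below; the socket path, addresses, NQNs and key names are taken verbatim from the trace, while the hostrpc/rpc_cmd wrappers are collapsed into direct calls, so treat this as a simplified sketch rather than the literal auth.sh code.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Host side: restrict the initiator to the digest/dhgroup under test.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Target side: authorize the host on the subsystem with the key pair under test.
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attaching a controller forces the DH-HMAC-CHAP exchange to run.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

The qpair JSON dumped right after this is what the jq checks on .[0].auth.digest, .[0].auth.dhgroup and .[0].auth.state verify (expecting sha384, the dhgroup under test, and "completed") before the controller is detached again.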
00:19:42.832 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.832 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.832 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.091 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.091 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.091 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.091 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.091 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.091 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.091 { 00:19:43.091 "cntlid": 65, 00:19:43.091 "qid": 0, 00:19:43.091 "state": "enabled", 00:19:43.091 "thread": "nvmf_tgt_poll_group_000", 00:19:43.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:43.091 "listen_address": { 00:19:43.091 "trtype": "TCP", 00:19:43.091 "adrfam": "IPv4", 00:19:43.091 "traddr": "10.0.0.2", 00:19:43.091 "trsvcid": "4420" 00:19:43.091 }, 00:19:43.091 "peer_address": { 00:19:43.091 "trtype": "TCP", 00:19:43.091 "adrfam": "IPv4", 00:19:43.091 "traddr": "10.0.0.1", 00:19:43.091 "trsvcid": "37912" 00:19:43.091 }, 00:19:43.091 "auth": { 00:19:43.091 "state": "completed", 00:19:43.091 "digest": "sha384", 00:19:43.091 "dhgroup": "ffdhe3072" 00:19:43.091 } 00:19:43.091 } 00:19:43.091 ]' 00:19:43.091 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.091 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.091 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.091 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.091 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.091 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.091 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.091 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.348 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:43.348 18:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.721 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.288 00:19:45.288 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.288 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.288 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.547 { 00:19:45.547 "cntlid": 67, 00:19:45.547 "qid": 0, 00:19:45.547 "state": "enabled", 00:19:45.547 "thread": "nvmf_tgt_poll_group_000", 00:19:45.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:45.547 "listen_address": { 00:19:45.547 "trtype": "TCP", 00:19:45.547 "adrfam": "IPv4", 00:19:45.547 "traddr": "10.0.0.2", 00:19:45.547 "trsvcid": "4420" 00:19:45.547 }, 00:19:45.547 "peer_address": { 00:19:45.547 "trtype": "TCP", 00:19:45.547 "adrfam": "IPv4", 00:19:45.547 "traddr": "10.0.0.1", 00:19:45.547 "trsvcid": "35848" 00:19:45.547 }, 00:19:45.547 "auth": { 00:19:45.547 "state": "completed", 00:19:45.547 "digest": "sha384", 00:19:45.547 "dhgroup": "ffdhe3072" 00:19:45.547 } 00:19:45.547 } 00:19:45.547 ]' 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.547 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.805 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret 
DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:45.805 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:46.739 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.997 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.997 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.997 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.997 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.997 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.997 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.997 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:47.254 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:47.254 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.254 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:47.254 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:47.254 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:47.254 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.254 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.512 00:19:47.512 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.512 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.512 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.771 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.771 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.771 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.771 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.771 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.771 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.771 { 00:19:47.771 "cntlid": 69, 00:19:47.771 "qid": 0, 00:19:47.771 "state": "enabled", 00:19:47.771 "thread": "nvmf_tgt_poll_group_000", 00:19:47.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:47.771 "listen_address": { 00:19:47.771 "trtype": "TCP", 00:19:47.771 "adrfam": "IPv4", 00:19:47.771 "traddr": "10.0.0.2", 00:19:47.771 "trsvcid": "4420" 00:19:47.771 }, 00:19:47.771 "peer_address": { 00:19:47.771 "trtype": "TCP", 00:19:47.771 "adrfam": "IPv4", 00:19:47.771 "traddr": "10.0.0.1", 00:19:47.771 "trsvcid": "35882" 00:19:47.771 }, 00:19:47.771 "auth": { 00:19:47.771 "state": "completed", 00:19:47.771 "digest": "sha384", 00:19:47.771 "dhgroup": "ffdhe3072" 00:19:47.771 } 00:19:47.771 } 00:19:47.771 ]' 00:19:47.771 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.771 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.771 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.771 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:47.771 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.029 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.029 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.029 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:48.287 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:48.287 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:49.220 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.220 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.220 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.220 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.220 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.220 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.220 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.220 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
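The key3 passes are the one asymmetric case in this part of the trace: no controller key is defined for that index, so the parameter expansion recorded at auth.sh@68 drops --dhchap-ctrlr-key and both nvmf_subsystem_add_host and bdev_connect run with the host key alone (unidirectional authentication). A minimal sketch of that expansion, with placeholder variable names where the trace only shows literal values:

# ckey stays an empty array when ckeys[$3] is unset, so "${ckey[@]}" expands to
# nothing and only --dhchap-key keyN reaches the RPCs (as with key3 above).
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
bdev_connect -b nvme0 --dhchap-key "key$3" "${ckey[@]}"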
00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.478 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.736 00:19:49.736 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.736 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.736 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.993 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.993 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.993 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.993 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.993 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.993 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.993 { 00:19:49.993 "cntlid": 71, 00:19:49.993 "qid": 0, 00:19:49.993 "state": "enabled", 00:19:49.993 "thread": "nvmf_tgt_poll_group_000", 00:19:49.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:49.993 "listen_address": { 00:19:49.993 "trtype": "TCP", 00:19:49.993 "adrfam": "IPv4", 00:19:49.993 "traddr": "10.0.0.2", 00:19:49.993 "trsvcid": "4420" 00:19:49.993 }, 00:19:49.993 "peer_address": { 00:19:49.993 "trtype": "TCP", 00:19:49.993 "adrfam": "IPv4", 00:19:49.993 "traddr": "10.0.0.1", 00:19:49.993 "trsvcid": "35912" 00:19:49.993 }, 00:19:49.993 "auth": { 00:19:49.993 "state": "completed", 00:19:49.993 "digest": "sha384", 00:19:49.993 "dhgroup": "ffdhe3072" 00:19:49.993 } 00:19:49.993 } 00:19:49.993 ]' 00:19:49.993 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.250 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.250 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.250 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.250 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.250 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.250 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.250 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.508 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:50.508 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:51.441 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.441 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.441 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.441 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.441 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.441 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.441 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.441 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.441 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
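Between the detach and the host removal, each pass also exercises the kernel initiator: the same key material is handed to the nvme CLI as literal DHHC-1 secrets, the connect is expected to authenticate, and the session is then torn down. A minimal sketch of that leg follows; $hostkey and $ctrlkey are stand-in variables (not names from the script) holding the full DHHC-1:... strings that are printed verbatim in the trace, and the --dhchap-ctrl-secret argument is only passed for key slots that have a controller key configured.

  # kernel-initiator leg of a pass
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$hostkey" --dhchap-ctrl-secret "$ctrlkey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0      # expect "disconnected 1 controller(s)"
  # finally deregister the host NQN so the next key can be bound cleanly
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55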
00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.699 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.264 00:19:52.264 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.264 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.264 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.523 { 00:19:52.523 "cntlid": 73, 00:19:52.523 "qid": 0, 00:19:52.523 "state": "enabled", 00:19:52.523 "thread": "nvmf_tgt_poll_group_000", 00:19:52.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:52.523 "listen_address": { 00:19:52.523 "trtype": "TCP", 00:19:52.523 "adrfam": "IPv4", 00:19:52.523 "traddr": "10.0.0.2", 00:19:52.523 "trsvcid": "4420" 00:19:52.523 }, 00:19:52.523 "peer_address": { 00:19:52.523 "trtype": "TCP", 00:19:52.523 "adrfam": "IPv4", 00:19:52.523 "traddr": "10.0.0.1", 00:19:52.523 "trsvcid": "35936" 00:19:52.523 }, 00:19:52.523 "auth": { 00:19:52.523 "state": "completed", 00:19:52.523 "digest": "sha384", 00:19:52.523 "dhgroup": "ffdhe4096" 00:19:52.523 } 00:19:52.523 } 00:19:52.523 ]' 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.523 
18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.523 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.780 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:52.781 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:19:53.711 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.711 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.711 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.711 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.711 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.711 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.711 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:53.711 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.277 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.535 00:19:54.535 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.535 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.535 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.794 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.794 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.794 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.794 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.794 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.794 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.794 { 00:19:54.794 "cntlid": 75, 00:19:54.794 "qid": 0, 00:19:54.794 "state": "enabled", 00:19:54.794 "thread": "nvmf_tgt_poll_group_000", 00:19:54.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:54.794 "listen_address": { 00:19:54.794 "trtype": "TCP", 00:19:54.794 "adrfam": "IPv4", 00:19:54.794 "traddr": "10.0.0.2", 00:19:54.794 "trsvcid": "4420" 00:19:54.794 }, 00:19:54.794 "peer_address": { 00:19:54.794 "trtype": "TCP", 00:19:54.794 "adrfam": "IPv4", 00:19:54.794 "traddr": "10.0.0.1", 00:19:54.794 "trsvcid": "40590" 00:19:54.794 }, 00:19:54.794 "auth": { 00:19:54.794 "state": "completed", 00:19:54.794 "digest": "sha384", 00:19:54.794 "dhgroup": "ffdhe4096" 00:19:54.794 } 00:19:54.794 } 00:19:54.794 ]' 00:19:54.794 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.794 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.794 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.794 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:19:54.794 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.052 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.052 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.052 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.309 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:55.309 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:19:56.243 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.243 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.243 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.243 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.243 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.243 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.243 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.243 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.501 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.067 00:19:57.067 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.067 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.067 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.067 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.067 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.067 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.067 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.067 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.067 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.067 { 00:19:57.067 "cntlid": 77, 00:19:57.067 "qid": 0, 00:19:57.067 "state": "enabled", 00:19:57.067 "thread": "nvmf_tgt_poll_group_000", 00:19:57.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:57.067 "listen_address": { 00:19:57.067 "trtype": "TCP", 00:19:57.067 "adrfam": "IPv4", 00:19:57.067 "traddr": "10.0.0.2", 00:19:57.067 "trsvcid": "4420" 00:19:57.067 }, 00:19:57.067 "peer_address": { 00:19:57.067 "trtype": "TCP", 00:19:57.067 "adrfam": "IPv4", 00:19:57.067 "traddr": "10.0.0.1", 00:19:57.067 "trsvcid": "40632" 00:19:57.067 }, 00:19:57.067 "auth": { 00:19:57.067 "state": "completed", 00:19:57.067 "digest": "sha384", 00:19:57.067 "dhgroup": "ffdhe4096" 00:19:57.067 } 00:19:57.067 } 00:19:57.067 ]' 00:19:57.067 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.325 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.325 18:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.325 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:57.325 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.325 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.325 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.325 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.584 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:57.584 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:19:58.518 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.518 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.518 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.518 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.518 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.518 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.518 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:58.518 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.776 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.342 00:19:59.342 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.342 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.342 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.601 { 00:19:59.601 "cntlid": 79, 00:19:59.601 "qid": 0, 00:19:59.601 "state": "enabled", 00:19:59.601 "thread": "nvmf_tgt_poll_group_000", 00:19:59.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:59.601 "listen_address": { 00:19:59.601 "trtype": "TCP", 00:19:59.601 "adrfam": "IPv4", 00:19:59.601 "traddr": "10.0.0.2", 00:19:59.601 "trsvcid": "4420" 00:19:59.601 }, 00:19:59.601 "peer_address": { 00:19:59.601 "trtype": "TCP", 00:19:59.601 "adrfam": "IPv4", 00:19:59.601 "traddr": "10.0.0.1", 00:19:59.601 "trsvcid": "40668" 00:19:59.601 }, 00:19:59.601 "auth": { 00:19:59.601 "state": "completed", 00:19:59.601 "digest": "sha384", 00:19:59.601 "dhgroup": "ffdhe4096" 00:19:59.601 } 00:19:59.601 } 00:19:59.601 ]' 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.601 18:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.601 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.859 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:19:59.859 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:00.794 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.794 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.794 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.794 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.794 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.794 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.794 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.794 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:00.794 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:01.360 18:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.360 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.929 00:20:01.929 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.929 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.929 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.188 { 00:20:02.188 "cntlid": 81, 00:20:02.188 "qid": 0, 00:20:02.188 "state": "enabled", 00:20:02.188 "thread": "nvmf_tgt_poll_group_000", 00:20:02.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:02.188 "listen_address": { 00:20:02.188 "trtype": "TCP", 00:20:02.188 "adrfam": "IPv4", 00:20:02.188 "traddr": "10.0.0.2", 00:20:02.188 "trsvcid": "4420" 00:20:02.188 }, 00:20:02.188 "peer_address": { 00:20:02.188 "trtype": "TCP", 00:20:02.188 "adrfam": "IPv4", 00:20:02.188 "traddr": "10.0.0.1", 00:20:02.188 "trsvcid": "40690" 00:20:02.188 }, 00:20:02.188 "auth": { 00:20:02.188 "state": "completed", 00:20:02.188 "digest": 
"sha384", 00:20:02.188 "dhgroup": "ffdhe6144" 00:20:02.188 } 00:20:02.188 } 00:20:02.188 ]' 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.188 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.446 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:02.446 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:03.426 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.426 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.426 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.426 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.426 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.426 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.426 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.426 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.685 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.620 00:20:04.620 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.620 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.620 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.620 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.620 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.620 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.620 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.620 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.620 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.620 { 00:20:04.620 "cntlid": 83, 00:20:04.620 "qid": 0, 00:20:04.620 "state": "enabled", 00:20:04.620 "thread": "nvmf_tgt_poll_group_000", 00:20:04.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:04.620 "listen_address": { 00:20:04.620 "trtype": "TCP", 00:20:04.620 "adrfam": "IPv4", 00:20:04.620 "traddr": "10.0.0.2", 00:20:04.620 
"trsvcid": "4420" 00:20:04.620 }, 00:20:04.620 "peer_address": { 00:20:04.620 "trtype": "TCP", 00:20:04.620 "adrfam": "IPv4", 00:20:04.620 "traddr": "10.0.0.1", 00:20:04.620 "trsvcid": "38950" 00:20:04.620 }, 00:20:04.620 "auth": { 00:20:04.620 "state": "completed", 00:20:04.620 "digest": "sha384", 00:20:04.620 "dhgroup": "ffdhe6144" 00:20:04.620 } 00:20:04.620 } 00:20:04.620 ]' 00:20:04.620 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.878 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.878 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.878 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.878 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.878 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.878 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.878 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.136 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:05.136 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:06.072 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.072 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.072 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.072 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.072 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.072 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.072 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:06.072 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:06.641 
18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.641 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.210 00:20:07.210 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.210 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.210 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.468 { 00:20:07.468 "cntlid": 85, 00:20:07.468 "qid": 0, 00:20:07.468 "state": "enabled", 00:20:07.468 "thread": "nvmf_tgt_poll_group_000", 00:20:07.468 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.468 "listen_address": { 00:20:07.468 "trtype": "TCP", 00:20:07.468 "adrfam": "IPv4", 00:20:07.468 "traddr": "10.0.0.2", 00:20:07.468 "trsvcid": "4420" 00:20:07.468 }, 00:20:07.468 "peer_address": { 00:20:07.468 "trtype": "TCP", 00:20:07.468 "adrfam": "IPv4", 00:20:07.468 "traddr": "10.0.0.1", 00:20:07.468 "trsvcid": "38970" 00:20:07.468 }, 00:20:07.468 "auth": { 00:20:07.468 "state": "completed", 00:20:07.468 "digest": "sha384", 00:20:07.468 "dhgroup": "ffdhe6144" 00:20:07.468 } 00:20:07.468 } 00:20:07.468 ]' 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.468 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.727 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:07.727 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:09.103 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.103 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.103 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.103 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.103 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.103 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.103 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.103 18:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:09.103 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:09.103 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.103 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:09.103 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:09.103 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:09.103 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.103 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:09.103 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.103 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.103 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.104 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:09.104 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.104 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.672 00:20:09.672 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.672 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.672 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.930 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.930 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.930 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.930 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.930 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.930 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.930 { 00:20:09.930 "cntlid": 87, 
00:20:09.930 "qid": 0, 00:20:09.930 "state": "enabled", 00:20:09.930 "thread": "nvmf_tgt_poll_group_000", 00:20:09.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:09.930 "listen_address": { 00:20:09.930 "trtype": "TCP", 00:20:09.930 "adrfam": "IPv4", 00:20:09.930 "traddr": "10.0.0.2", 00:20:09.930 "trsvcid": "4420" 00:20:09.930 }, 00:20:09.930 "peer_address": { 00:20:09.930 "trtype": "TCP", 00:20:09.930 "adrfam": "IPv4", 00:20:09.930 "traddr": "10.0.0.1", 00:20:09.930 "trsvcid": "39004" 00:20:09.930 }, 00:20:09.930 "auth": { 00:20:09.930 "state": "completed", 00:20:09.930 "digest": "sha384", 00:20:09.930 "dhgroup": "ffdhe6144" 00:20:09.930 } 00:20:09.930 } 00:20:09.930 ]' 00:20:09.930 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.188 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:10.188 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.188 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:10.188 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.188 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.188 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.188 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.446 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:10.446 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:11.384 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.384 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.384 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.384 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.384 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.384 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.384 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.384 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.384 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.642 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.580 00:20:12.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.837 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.837 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.837 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.837 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.837 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.837 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.837 { 00:20:12.837 "cntlid": 89, 00:20:12.837 "qid": 0, 00:20:12.838 "state": "enabled", 00:20:12.838 "thread": "nvmf_tgt_poll_group_000", 00:20:12.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:12.838 "listen_address": { 00:20:12.838 "trtype": "TCP", 00:20:12.838 "adrfam": "IPv4", 00:20:12.838 "traddr": "10.0.0.2", 00:20:12.838 "trsvcid": "4420" 00:20:12.838 }, 00:20:12.838 "peer_address": { 00:20:12.838 "trtype": "TCP", 00:20:12.838 "adrfam": "IPv4", 00:20:12.838 "traddr": "10.0.0.1", 00:20:12.838 "trsvcid": "39038" 00:20:12.838 }, 00:20:12.838 "auth": { 00:20:12.838 "state": "completed", 00:20:12.838 "digest": "sha384", 00:20:12.838 "dhgroup": "ffdhe8192" 00:20:12.838 } 00:20:12.838 } 00:20:12.838 ]' 00:20:12.838 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.838 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.838 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.095 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.095 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.095 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.095 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.095 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.353 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:13.353 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:14.289 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.289 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.289 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.289 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.289 18:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.289 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.289 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:14.289 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.858 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.795 00:20:15.795 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.795 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.795 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.795 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.795 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:15.795 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.795 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.795 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.795 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.795 { 00:20:15.795 "cntlid": 91, 00:20:15.795 "qid": 0, 00:20:15.795 "state": "enabled", 00:20:15.795 "thread": "nvmf_tgt_poll_group_000", 00:20:15.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:15.795 "listen_address": { 00:20:15.795 "trtype": "TCP", 00:20:15.795 "adrfam": "IPv4", 00:20:15.795 "traddr": "10.0.0.2", 00:20:15.795 "trsvcid": "4420" 00:20:15.795 }, 00:20:15.795 "peer_address": { 00:20:15.795 "trtype": "TCP", 00:20:15.795 "adrfam": "IPv4", 00:20:15.795 "traddr": "10.0.0.1", 00:20:15.795 "trsvcid": "54490" 00:20:15.795 }, 00:20:15.795 "auth": { 00:20:15.795 "state": "completed", 00:20:15.795 "digest": "sha384", 00:20:15.795 "dhgroup": "ffdhe8192" 00:20:15.795 } 00:20:15.795 } 00:20:15.795 ]' 00:20:15.795 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.053 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.053 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.053 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.053 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.053 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.053 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.053 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.313 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:16.313 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:17.252 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.252 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.252 18:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.252 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.252 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.252 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.252 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.252 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.511 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.449 00:20:18.449 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.449 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.449 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.707 18:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.707 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.707 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.707 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.707 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.707 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.707 { 00:20:18.707 "cntlid": 93, 00:20:18.707 "qid": 0, 00:20:18.707 "state": "enabled", 00:20:18.707 "thread": "nvmf_tgt_poll_group_000", 00:20:18.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.707 "listen_address": { 00:20:18.707 "trtype": "TCP", 00:20:18.707 "adrfam": "IPv4", 00:20:18.707 "traddr": "10.0.0.2", 00:20:18.707 "trsvcid": "4420" 00:20:18.707 }, 00:20:18.707 "peer_address": { 00:20:18.707 "trtype": "TCP", 00:20:18.707 "adrfam": "IPv4", 00:20:18.707 "traddr": "10.0.0.1", 00:20:18.707 "trsvcid": "54504" 00:20:18.707 }, 00:20:18.707 "auth": { 00:20:18.707 "state": "completed", 00:20:18.707 "digest": "sha384", 00:20:18.707 "dhgroup": "ffdhe8192" 00:20:18.707 } 00:20:18.707 } 00:20:18.707 ]' 00:20:18.707 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.707 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.707 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.965 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.965 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.965 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.965 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.965 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.223 18:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:19.223 18:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:20.160 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.160 18:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.160 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.160 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.160 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.160 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.160 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.160 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.419 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:20:20.419 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.419 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:20.419 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:20.419 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.419 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.419 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:20.419 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.419 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.419 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.677 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.677 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.677 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.616 00:20:21.616 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.616 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.616 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.875 { 00:20:21.875 "cntlid": 95, 00:20:21.875 "qid": 0, 00:20:21.875 "state": "enabled", 00:20:21.875 "thread": "nvmf_tgt_poll_group_000", 00:20:21.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:21.875 "listen_address": { 00:20:21.875 "trtype": "TCP", 00:20:21.875 "adrfam": "IPv4", 00:20:21.875 "traddr": "10.0.0.2", 00:20:21.875 "trsvcid": "4420" 00:20:21.875 }, 00:20:21.875 "peer_address": { 00:20:21.875 "trtype": "TCP", 00:20:21.875 "adrfam": "IPv4", 00:20:21.875 "traddr": "10.0.0.1", 00:20:21.875 "trsvcid": "54536" 00:20:21.875 }, 00:20:21.875 "auth": { 00:20:21.875 "state": "completed", 00:20:21.875 "digest": "sha384", 00:20:21.875 "dhgroup": "ffdhe8192" 00:20:21.875 } 00:20:21.875 } 00:20:21.875 ]' 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.875 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.133 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:22.133 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:23.073 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.073 18:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.073 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.073 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.073 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.073 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:23.073 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.073 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.073 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:23.073 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:23.330 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:20:23.330 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.330 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:23.331 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:23.331 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:23.331 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.331 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.331 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.331 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.331 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.331 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.331 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.331 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.896 00:20:23.896 
18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.896 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.896 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.155 { 00:20:24.155 "cntlid": 97, 00:20:24.155 "qid": 0, 00:20:24.155 "state": "enabled", 00:20:24.155 "thread": "nvmf_tgt_poll_group_000", 00:20:24.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.155 "listen_address": { 00:20:24.155 "trtype": "TCP", 00:20:24.155 "adrfam": "IPv4", 00:20:24.155 "traddr": "10.0.0.2", 00:20:24.155 "trsvcid": "4420" 00:20:24.155 }, 00:20:24.155 "peer_address": { 00:20:24.155 "trtype": "TCP", 00:20:24.155 "adrfam": "IPv4", 00:20:24.155 "traddr": "10.0.0.1", 00:20:24.155 "trsvcid": "47630" 00:20:24.155 }, 00:20:24.155 "auth": { 00:20:24.155 "state": "completed", 00:20:24.155 "digest": "sha512", 00:20:24.155 "dhgroup": "null" 00:20:24.155 } 00:20:24.155 } 00:20:24.155 ]' 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.155 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.413 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:24.413 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:25.352 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.352 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.352 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.352 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.352 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.352 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.352 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:25.352 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:25.611 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:20:25.611 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.611 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:25.611 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:25.611 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.611 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.611 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.611 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.611 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.870 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.870 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.870 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.870 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.129 00:20:26.129 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.129 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.129 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.388 { 00:20:26.388 "cntlid": 99, 00:20:26.388 "qid": 0, 00:20:26.388 "state": "enabled", 00:20:26.388 "thread": "nvmf_tgt_poll_group_000", 00:20:26.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:26.388 "listen_address": { 00:20:26.388 "trtype": "TCP", 00:20:26.388 "adrfam": "IPv4", 00:20:26.388 "traddr": "10.0.0.2", 00:20:26.388 "trsvcid": "4420" 00:20:26.388 }, 00:20:26.388 "peer_address": { 00:20:26.388 "trtype": "TCP", 00:20:26.388 "adrfam": "IPv4", 00:20:26.388 "traddr": "10.0.0.1", 00:20:26.388 "trsvcid": "47660" 00:20:26.388 }, 00:20:26.388 "auth": { 00:20:26.388 "state": "completed", 00:20:26.388 "digest": "sha512", 00:20:26.388 "dhgroup": "null" 00:20:26.388 } 00:20:26.388 } 00:20:26.388 ]' 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.388 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.648 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:26.648 18:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:27.583 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.583 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.583 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.583 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.583 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.583 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.583 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:27.583 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
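The pass above (sha512 digest, null DH group, key2) shows the host-side sequence that every connect_authenticate iteration in this log repeats: constrain the host's DH-HMAC-CHAP parameters, attach a controller with the keyring keys, and detach again before the next combination. A minimal sketch of that sequence, assuming the SPDK host application is already listening on /var/tmp/host.sock and that key2/ckey2 were registered with its keyring earlier in the test (paths, addresses and NQNs are copied from the log):

#!/usr/bin/env bash
set -euo pipefail
# Sketch of the host-side DH-HMAC-CHAP flow repeated in this log.
# Assumes the SPDK host app is running on /var/tmp/host.sock and that
# key2/ckey2 were added to its keyring earlier in the test (not shown here).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Limit the host to a single digest / DH group combination for this pass.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups null

# Attach a controller, authenticating with key2 (and ckey2 for the controller side).
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Tear the controller down again before the next digest / DH group pass.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0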
00:20:28.154 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.413 00:20:28.413 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.413 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.413 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.672 { 00:20:28.672 "cntlid": 101, 00:20:28.672 "qid": 0, 00:20:28.672 "state": "enabled", 00:20:28.672 "thread": "nvmf_tgt_poll_group_000", 00:20:28.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.672 "listen_address": { 00:20:28.672 "trtype": "TCP", 00:20:28.672 "adrfam": "IPv4", 00:20:28.672 "traddr": "10.0.0.2", 00:20:28.672 "trsvcid": "4420" 00:20:28.672 }, 00:20:28.672 "peer_address": { 00:20:28.672 "trtype": "TCP", 00:20:28.672 "adrfam": "IPv4", 00:20:28.672 "traddr": "10.0.0.1", 00:20:28.672 "trsvcid": "47686" 00:20:28.672 }, 00:20:28.672 "auth": { 00:20:28.672 "state": "completed", 00:20:28.672 "digest": "sha512", 00:20:28.672 "dhgroup": "null" 00:20:28.672 } 00:20:28.672 } 00:20:28.672 ]' 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.672 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.955 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:28.955 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:29.952 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.952 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.952 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.952 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.952 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.952 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.952 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:29.952 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.209 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.468 00:20:30.468 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.468 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.468 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.727 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.727 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.727 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.727 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.986 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.986 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.986 { 00:20:30.986 "cntlid": 103, 00:20:30.986 "qid": 0, 00:20:30.986 "state": "enabled", 00:20:30.986 "thread": "nvmf_tgt_poll_group_000", 00:20:30.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:30.986 "listen_address": { 00:20:30.986 "trtype": "TCP", 00:20:30.986 "adrfam": "IPv4", 00:20:30.986 "traddr": "10.0.0.2", 00:20:30.986 "trsvcid": "4420" 00:20:30.986 }, 00:20:30.986 "peer_address": { 00:20:30.986 "trtype": "TCP", 00:20:30.986 "adrfam": "IPv4", 00:20:30.986 "traddr": "10.0.0.1", 00:20:30.986 "trsvcid": "47714" 00:20:30.986 }, 00:20:30.986 "auth": { 00:20:30.986 "state": "completed", 00:20:30.986 "digest": "sha512", 00:20:30.986 "dhgroup": "null" 00:20:30.986 } 00:20:30.986 } 00:20:30.986 ]' 00:20:30.986 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.986 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.986 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.986 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:30.986 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.986 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.986 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.986 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.245 18:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:31.245 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:32.183 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.183 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.183 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.183 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.183 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.183 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.183 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.183 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:32.183 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
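The cycle that follows restates the host-side RPC sequence the trace repeats for every digest/dhgroup/key combination. A minimal sketch of that sequence, assuming the socket paths, NQNs and key names shown in the trace (key0/ckey0 are taken to be already registered with the host application):

#!/usr/bin/env bash
# Sketch of the host-side half of one authentication cycle, condensed from the trace above.
# All paths, NQNs and key names are the ones that appear in the log; nothing new is introduced.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBSYS=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Limit the host bdev layer to a single digest and DH group for this cycle ...
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# ... then attach to the subsystem using the named DH-HMAC-CHAP key and controller key.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBSYS" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0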
00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.441 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.010 00:20:33.010 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.010 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.010 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.271 { 00:20:33.271 "cntlid": 105, 00:20:33.271 "qid": 0, 00:20:33.271 "state": "enabled", 00:20:33.271 "thread": "nvmf_tgt_poll_group_000", 00:20:33.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:33.271 "listen_address": { 00:20:33.271 "trtype": "TCP", 00:20:33.271 "adrfam": "IPv4", 00:20:33.271 "traddr": "10.0.0.2", 00:20:33.271 "trsvcid": "4420" 00:20:33.271 }, 00:20:33.271 "peer_address": { 00:20:33.271 "trtype": "TCP", 00:20:33.271 "adrfam": "IPv4", 00:20:33.271 "traddr": "10.0.0.1", 00:20:33.271 "trsvcid": "35994" 00:20:33.271 }, 00:20:33.271 "auth": { 00:20:33.271 "state": "completed", 00:20:33.271 "digest": "sha512", 00:20:33.271 "dhgroup": "ffdhe2048" 00:20:33.271 } 00:20:33.271 } 00:20:33.271 ]' 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.271 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.271 18:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.529 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:33.529 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:34.460 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.460 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.460 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.460 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.460 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.460 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.460 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.460 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.718 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.329 00:20:35.329 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.329 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.329 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.329 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.329 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.329 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.329 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.329 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.329 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.329 { 00:20:35.329 "cntlid": 107, 00:20:35.329 "qid": 0, 00:20:35.329 "state": "enabled", 00:20:35.329 "thread": "nvmf_tgt_poll_group_000", 00:20:35.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:35.329 "listen_address": { 00:20:35.329 "trtype": "TCP", 00:20:35.329 "adrfam": "IPv4", 00:20:35.329 "traddr": "10.0.0.2", 00:20:35.329 "trsvcid": "4420" 00:20:35.329 }, 00:20:35.329 "peer_address": { 00:20:35.329 "trtype": "TCP", 00:20:35.329 "adrfam": "IPv4", 00:20:35.329 "traddr": "10.0.0.1", 00:20:35.329 "trsvcid": "36006" 00:20:35.329 }, 00:20:35.329 "auth": { 00:20:35.329 "state": "completed", 00:20:35.329 "digest": "sha512", 00:20:35.329 "dhgroup": "ffdhe2048" 00:20:35.329 } 00:20:35.329 } 00:20:35.329 ]' 00:20:35.329 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.586 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.586 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.586 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.586 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:35.586 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.586 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.586 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.844 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:35.844 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:36.778 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.778 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.778 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.778 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.037 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.037 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.037 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:37.037 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
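Each attach in the trace is followed by the same verification: fetch the subsystem's queue pairs and assert that the negotiated digest, DH group and authentication state match what was configured. A minimal sketch of that check, assuming a single qpair as in the log and that rpc.py without -s addresses the target's default socket:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBSYS=nqn.2024-03.io.spdk:cnode0

# The trace checks .[0] because exactly one qpair is expected per test connection.
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBSYS")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]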
00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.296 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.555 00:20:37.555 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.555 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.555 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.813 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.813 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.813 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.813 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.813 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.813 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.813 { 00:20:37.813 "cntlid": 109, 00:20:37.813 "qid": 0, 00:20:37.813 "state": "enabled", 00:20:37.813 "thread": "nvmf_tgt_poll_group_000", 00:20:37.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.813 "listen_address": { 00:20:37.813 "trtype": "TCP", 00:20:37.813 "adrfam": "IPv4", 00:20:37.813 "traddr": "10.0.0.2", 00:20:37.813 "trsvcid": "4420" 00:20:37.813 }, 00:20:37.813 "peer_address": { 00:20:37.813 "trtype": "TCP", 00:20:37.813 "adrfam": "IPv4", 00:20:37.813 "traddr": "10.0.0.1", 00:20:37.813 "trsvcid": "36038" 00:20:37.813 }, 00:20:37.813 "auth": { 00:20:37.813 "state": "completed", 00:20:37.813 "digest": "sha512", 00:20:37.813 "dhgroup": "ffdhe2048" 00:20:37.813 } 00:20:37.813 } 00:20:37.813 ]' 00:20:37.813 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.813 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.813 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.813 18:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:37.813 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.075 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.075 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.075 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.333 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:38.333 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:39.270 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.270 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.270 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.270 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.270 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.270 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.270 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.270 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.529 18:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.529 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.788 00:20:40.047 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.047 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.047 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.305 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.305 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.305 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.305 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.305 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.305 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.305 { 00:20:40.305 "cntlid": 111, 00:20:40.305 "qid": 0, 00:20:40.305 "state": "enabled", 00:20:40.305 "thread": "nvmf_tgt_poll_group_000", 00:20:40.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.305 "listen_address": { 00:20:40.305 "trtype": "TCP", 00:20:40.305 "adrfam": "IPv4", 00:20:40.305 "traddr": "10.0.0.2", 00:20:40.305 "trsvcid": "4420" 00:20:40.305 }, 00:20:40.305 "peer_address": { 00:20:40.305 "trtype": "TCP", 00:20:40.305 "adrfam": "IPv4", 00:20:40.305 "traddr": "10.0.0.1", 00:20:40.305 "trsvcid": "36076" 00:20:40.305 }, 00:20:40.305 "auth": { 00:20:40.305 "state": "completed", 00:20:40.305 "digest": "sha512", 00:20:40.305 "dhgroup": "ffdhe2048" 00:20:40.305 } 00:20:40.305 } 00:20:40.305 ]' 00:20:40.305 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.305 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.305 
18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.305 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.306 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.306 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.306 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.306 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.566 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:40.566 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.944 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.202 00:20:42.460 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.460 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.460 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.719 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.719 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.719 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.719 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.719 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.719 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.719 { 00:20:42.719 "cntlid": 113, 00:20:42.719 "qid": 0, 00:20:42.719 "state": "enabled", 00:20:42.719 "thread": "nvmf_tgt_poll_group_000", 00:20:42.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:42.719 "listen_address": { 00:20:42.719 "trtype": "TCP", 00:20:42.719 "adrfam": "IPv4", 00:20:42.719 "traddr": "10.0.0.2", 00:20:42.719 "trsvcid": "4420" 00:20:42.720 }, 00:20:42.720 "peer_address": { 00:20:42.720 "trtype": "TCP", 00:20:42.720 "adrfam": "IPv4", 00:20:42.720 "traddr": "10.0.0.1", 00:20:42.720 "trsvcid": "36098" 00:20:42.720 }, 00:20:42.720 "auth": { 00:20:42.720 "state": "completed", 00:20:42.720 "digest": "sha512", 00:20:42.720 "dhgroup": "ffdhe3072" 00:20:42.720 } 00:20:42.720 } 00:20:42.720 ]' 00:20:42.720 18:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.720 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.720 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.720 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.720 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.720 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.720 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.720 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.979 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:42.979 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:43.912 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.170 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.170 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.170 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.170 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.170 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.170 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.170 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.428 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.687 00:20:44.687 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.687 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.687 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.945 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.945 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.945 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.945 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.945 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.945 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.945 { 00:20:44.945 "cntlid": 115, 00:20:44.945 "qid": 0, 00:20:44.945 "state": "enabled", 00:20:44.945 "thread": "nvmf_tgt_poll_group_000", 00:20:44.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:44.945 "listen_address": { 00:20:44.945 "trtype": "TCP", 00:20:44.945 "adrfam": "IPv4", 00:20:44.945 "traddr": "10.0.0.2", 00:20:44.945 "trsvcid": "4420" 00:20:44.945 }, 00:20:44.945 "peer_address": { 00:20:44.945 "trtype": "TCP", 00:20:44.945 "adrfam": "IPv4", 
00:20:44.945 "traddr": "10.0.0.1", 00:20:44.945 "trsvcid": "40148" 00:20:44.945 }, 00:20:44.945 "auth": { 00:20:44.945 "state": "completed", 00:20:44.945 "digest": "sha512", 00:20:44.945 "dhgroup": "ffdhe3072" 00:20:44.945 } 00:20:44.945 } 00:20:44.945 ]' 00:20:44.945 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.945 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.946 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.946 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:44.946 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.204 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.204 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.204 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.462 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:45.462 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:46.399 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.399 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.399 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.399 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.399 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.399 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.399 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:46.399 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.657 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.915 00:20:46.915 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.915 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.915 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.173 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.173 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.173 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.173 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.173 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.173 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.173 { 00:20:47.173 "cntlid": 117, 00:20:47.173 "qid": 0, 00:20:47.173 "state": "enabled", 00:20:47.173 "thread": "nvmf_tgt_poll_group_000", 00:20:47.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.173 "listen_address": { 00:20:47.173 "trtype": "TCP", 
00:20:47.173 "adrfam": "IPv4", 00:20:47.173 "traddr": "10.0.0.2", 00:20:47.173 "trsvcid": "4420" 00:20:47.173 }, 00:20:47.173 "peer_address": { 00:20:47.173 "trtype": "TCP", 00:20:47.173 "adrfam": "IPv4", 00:20:47.173 "traddr": "10.0.0.1", 00:20:47.173 "trsvcid": "40186" 00:20:47.173 }, 00:20:47.173 "auth": { 00:20:47.173 "state": "completed", 00:20:47.173 "digest": "sha512", 00:20:47.173 "dhgroup": "ffdhe3072" 00:20:47.173 } 00:20:47.173 } 00:20:47.173 ]' 00:20:47.173 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.431 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.431 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.431 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.431 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.431 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.431 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.431 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.688 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:47.688 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:48.623 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.623 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.623 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.623 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.623 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.623 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.623 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:48.623 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.881 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.446 00:20:49.446 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.446 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.446 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.703 { 00:20:49.703 "cntlid": 119, 00:20:49.703 "qid": 0, 00:20:49.703 "state": "enabled", 00:20:49.703 "thread": "nvmf_tgt_poll_group_000", 00:20:49.703 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:49.703 "listen_address": { 00:20:49.703 "trtype": "TCP", 00:20:49.703 "adrfam": "IPv4", 00:20:49.703 "traddr": "10.0.0.2", 00:20:49.703 "trsvcid": "4420" 00:20:49.703 }, 00:20:49.703 "peer_address": { 00:20:49.703 "trtype": "TCP", 00:20:49.703 "adrfam": "IPv4", 00:20:49.703 "traddr": "10.0.0.1", 00:20:49.703 "trsvcid": "40206" 00:20:49.703 }, 00:20:49.703 "auth": { 00:20:49.703 "state": "completed", 00:20:49.703 "digest": "sha512", 00:20:49.703 "dhgroup": "ffdhe3072" 00:20:49.703 } 00:20:49.703 } 00:20:49.703 ]' 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.703 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.960 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:49.960 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:50.892 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.892 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.892 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.892 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.151 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.151 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.151 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.151 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:51.151 18:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.409 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.974 00:20:51.974 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.974 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.974 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.974 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.974 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.974 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.974 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.974 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.233 18:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.233 { 00:20:52.233 "cntlid": 121, 00:20:52.233 "qid": 0, 00:20:52.233 "state": "enabled", 00:20:52.233 "thread": "nvmf_tgt_poll_group_000", 00:20:52.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.233 "listen_address": { 00:20:52.233 "trtype": "TCP", 00:20:52.233 "adrfam": "IPv4", 00:20:52.233 "traddr": "10.0.0.2", 00:20:52.233 "trsvcid": "4420" 00:20:52.233 }, 00:20:52.233 "peer_address": { 00:20:52.233 "trtype": "TCP", 00:20:52.233 "adrfam": "IPv4", 00:20:52.233 "traddr": "10.0.0.1", 00:20:52.233 "trsvcid": "40214" 00:20:52.233 }, 00:20:52.233 "auth": { 00:20:52.233 "state": "completed", 00:20:52.233 "digest": "sha512", 00:20:52.233 "dhgroup": "ffdhe4096" 00:20:52.233 } 00:20:52.233 } 00:20:52.233 ]' 00:20:52.233 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.233 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.233 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.233 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.233 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.233 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.233 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.233 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.491 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:52.491 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:20:53.424 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.424 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.424 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.424 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.425 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:53.425 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.425 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:53.425 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.683 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.326 00:20:54.326 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.326 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.326 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.584 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.585 { 00:20:54.585 "cntlid": 123, 00:20:54.585 "qid": 0, 00:20:54.585 "state": "enabled", 00:20:54.585 "thread": "nvmf_tgt_poll_group_000", 00:20:54.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.585 "listen_address": { 00:20:54.585 "trtype": "TCP", 00:20:54.585 "adrfam": "IPv4", 00:20:54.585 "traddr": "10.0.0.2", 00:20:54.585 "trsvcid": "4420" 00:20:54.585 }, 00:20:54.585 "peer_address": { 00:20:54.585 "trtype": "TCP", 00:20:54.585 "adrfam": "IPv4", 00:20:54.585 "traddr": "10.0.0.1", 00:20:54.585 "trsvcid": "42242" 00:20:54.585 }, 00:20:54.585 "auth": { 00:20:54.585 "state": "completed", 00:20:54.585 "digest": "sha512", 00:20:54.585 "dhgroup": "ffdhe4096" 00:20:54.585 } 00:20:54.585 } 00:20:54.585 ]' 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.585 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.843 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:54.843 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:20:55.779 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.779 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.779 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.779 18:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.779 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.779 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.779 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.779 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:56.036 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:56.037 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.037 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:56.037 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:56.037 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.037 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.037 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.037 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.037 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.294 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.294 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.294 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.294 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.552 00:20:56.552 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.553 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.553 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.811 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.811 18:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.811 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.811 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.811 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.811 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.811 { 00:20:56.811 "cntlid": 125, 00:20:56.811 "qid": 0, 00:20:56.811 "state": "enabled", 00:20:56.811 "thread": "nvmf_tgt_poll_group_000", 00:20:56.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:56.811 "listen_address": { 00:20:56.811 "trtype": "TCP", 00:20:56.811 "adrfam": "IPv4", 00:20:56.811 "traddr": "10.0.0.2", 00:20:56.811 "trsvcid": "4420" 00:20:56.811 }, 00:20:56.811 "peer_address": { 00:20:56.811 "trtype": "TCP", 00:20:56.811 "adrfam": "IPv4", 00:20:56.811 "traddr": "10.0.0.1", 00:20:56.811 "trsvcid": "42282" 00:20:56.811 }, 00:20:56.811 "auth": { 00:20:56.811 "state": "completed", 00:20:56.811 "digest": "sha512", 00:20:56.811 "dhgroup": "ffdhe4096" 00:20:56.811 } 00:20:56.811 } 00:20:56.811 ]' 00:20:56.811 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.811 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.811 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.068 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.068 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.068 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.068 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.068 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.326 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:57.326 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:20:58.260 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.260 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.260 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.260 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.260 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.260 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.260 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.260 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.518 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.084 00:20:59.084 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.084 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.084 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.343 18:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.343 { 00:20:59.343 "cntlid": 127, 00:20:59.343 "qid": 0, 00:20:59.343 "state": "enabled", 00:20:59.343 "thread": "nvmf_tgt_poll_group_000", 00:20:59.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.343 "listen_address": { 00:20:59.343 "trtype": "TCP", 00:20:59.343 "adrfam": "IPv4", 00:20:59.343 "traddr": "10.0.0.2", 00:20:59.343 "trsvcid": "4420" 00:20:59.343 }, 00:20:59.343 "peer_address": { 00:20:59.343 "trtype": "TCP", 00:20:59.343 "adrfam": "IPv4", 00:20:59.343 "traddr": "10.0.0.1", 00:20:59.343 "trsvcid": "42310" 00:20:59.343 }, 00:20:59.343 "auth": { 00:20:59.343 "state": "completed", 00:20:59.343 "digest": "sha512", 00:20:59.343 "dhgroup": "ffdhe4096" 00:20:59.343 } 00:20:59.343 } 00:20:59.343 ]' 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.343 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.601 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:20:59.601 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:21:00.535 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.535 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.535 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.535 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.535 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.535 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.535 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.535 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:00.535 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:00.796 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:00.796 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.796 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:00.796 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:00.796 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:00.796 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.796 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.796 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.796 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.054 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.054 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.054 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.054 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.620 00:21:01.620 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.620 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.620 
18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.879 { 00:21:01.879 "cntlid": 129, 00:21:01.879 "qid": 0, 00:21:01.879 "state": "enabled", 00:21:01.879 "thread": "nvmf_tgt_poll_group_000", 00:21:01.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.879 "listen_address": { 00:21:01.879 "trtype": "TCP", 00:21:01.879 "adrfam": "IPv4", 00:21:01.879 "traddr": "10.0.0.2", 00:21:01.879 "trsvcid": "4420" 00:21:01.879 }, 00:21:01.879 "peer_address": { 00:21:01.879 "trtype": "TCP", 00:21:01.879 "adrfam": "IPv4", 00:21:01.879 "traddr": "10.0.0.1", 00:21:01.879 "trsvcid": "42330" 00:21:01.879 }, 00:21:01.879 "auth": { 00:21:01.879 "state": "completed", 00:21:01.879 "digest": "sha512", 00:21:01.879 "dhgroup": "ffdhe6144" 00:21:01.879 } 00:21:01.879 } 00:21:01.879 ]' 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.879 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.138 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:21:02.138 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret 
DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:21:03.072 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.072 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.072 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.072 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.072 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.072 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.072 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.072 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.330 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.895 00:21:04.153 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.153 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.153 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.411 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.411 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.411 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.411 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.411 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.411 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.411 { 00:21:04.411 "cntlid": 131, 00:21:04.411 "qid": 0, 00:21:04.411 "state": "enabled", 00:21:04.412 "thread": "nvmf_tgt_poll_group_000", 00:21:04.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.412 "listen_address": { 00:21:04.412 "trtype": "TCP", 00:21:04.412 "adrfam": "IPv4", 00:21:04.412 "traddr": "10.0.0.2", 00:21:04.412 "trsvcid": "4420" 00:21:04.412 }, 00:21:04.412 "peer_address": { 00:21:04.412 "trtype": "TCP", 00:21:04.412 "adrfam": "IPv4", 00:21:04.412 "traddr": "10.0.0.1", 00:21:04.412 "trsvcid": "51522" 00:21:04.412 }, 00:21:04.412 "auth": { 00:21:04.412 "state": "completed", 00:21:04.412 "digest": "sha512", 00:21:04.412 "dhgroup": "ffdhe6144" 00:21:04.412 } 00:21:04.412 } 00:21:04.412 ]' 00:21:04.412 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.412 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.412 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.412 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:04.412 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.412 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.412 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.412 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.670 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:21:04.671 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:21:05.605 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.605 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.605 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.605 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.605 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.605 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.605 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:05.605 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.863 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.429 00:21:06.429 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.429 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.429 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.994 { 00:21:06.994 "cntlid": 133, 00:21:06.994 "qid": 0, 00:21:06.994 "state": "enabled", 00:21:06.994 "thread": "nvmf_tgt_poll_group_000", 00:21:06.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.994 "listen_address": { 00:21:06.994 "trtype": "TCP", 00:21:06.994 "adrfam": "IPv4", 00:21:06.994 "traddr": "10.0.0.2", 00:21:06.994 "trsvcid": "4420" 00:21:06.994 }, 00:21:06.994 "peer_address": { 00:21:06.994 "trtype": "TCP", 00:21:06.994 "adrfam": "IPv4", 00:21:06.994 "traddr": "10.0.0.1", 00:21:06.994 "trsvcid": "51552" 00:21:06.994 }, 00:21:06.994 "auth": { 00:21:06.994 "state": "completed", 00:21:06.994 "digest": "sha512", 00:21:06.994 "dhgroup": "ffdhe6144" 00:21:06.994 } 00:21:06.994 } 00:21:06.994 ]' 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.994 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.252 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret 
DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:21:07.252 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:21:08.186 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.186 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.186 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.186 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.186 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.186 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.186 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.186 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:08.444 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.010 00:21:09.010 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.010 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.010 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.576 { 00:21:09.576 "cntlid": 135, 00:21:09.576 "qid": 0, 00:21:09.576 "state": "enabled", 00:21:09.576 "thread": "nvmf_tgt_poll_group_000", 00:21:09.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.576 "listen_address": { 00:21:09.576 "trtype": "TCP", 00:21:09.576 "adrfam": "IPv4", 00:21:09.576 "traddr": "10.0.0.2", 00:21:09.576 "trsvcid": "4420" 00:21:09.576 }, 00:21:09.576 "peer_address": { 00:21:09.576 "trtype": "TCP", 00:21:09.576 "adrfam": "IPv4", 00:21:09.576 "traddr": "10.0.0.1", 00:21:09.576 "trsvcid": "51588" 00:21:09.576 }, 00:21:09.576 "auth": { 00:21:09.576 "state": "completed", 00:21:09.576 "digest": "sha512", 00:21:09.576 "dhgroup": "ffdhe6144" 00:21:09.576 } 00:21:09.576 } 00:21:09.576 ]' 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.576 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.577 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.577 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.835 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:21:09.835 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:21:10.768 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.768 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.768 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.768 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.768 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.768 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.768 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.768 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.768 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.334 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.267 00:21:12.267 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.267 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.267 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.267 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.267 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.267 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.267 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.267 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.267 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.267 { 00:21:12.267 "cntlid": 137, 00:21:12.267 "qid": 0, 00:21:12.268 "state": "enabled", 00:21:12.268 "thread": "nvmf_tgt_poll_group_000", 00:21:12.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.268 "listen_address": { 00:21:12.268 "trtype": "TCP", 00:21:12.268 "adrfam": "IPv4", 00:21:12.268 "traddr": "10.0.0.2", 00:21:12.268 "trsvcid": "4420" 00:21:12.268 }, 00:21:12.268 "peer_address": { 00:21:12.268 "trtype": "TCP", 00:21:12.268 "adrfam": "IPv4", 00:21:12.268 "traddr": "10.0.0.1", 00:21:12.268 "trsvcid": "51606" 00:21:12.268 }, 00:21:12.268 "auth": { 00:21:12.268 "state": "completed", 00:21:12.268 "digest": "sha512", 00:21:12.268 "dhgroup": "ffdhe8192" 00:21:12.268 } 00:21:12.268 } 00:21:12.268 ]' 00:21:12.268 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.268 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.526 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.526 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:12.526 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.526 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.526 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.526 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.784 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:21:12.784 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:21:13.887 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.887 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.887 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.887 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.887 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.887 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.887 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.887 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.145 18:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.145 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.081 00:21:15.081 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.081 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.081 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.339 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.339 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.339 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.339 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.339 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.339 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.339 { 00:21:15.339 "cntlid": 139, 00:21:15.339 "qid": 0, 00:21:15.339 "state": "enabled", 00:21:15.339 "thread": "nvmf_tgt_poll_group_000", 00:21:15.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:15.339 "listen_address": { 00:21:15.339 "trtype": "TCP", 00:21:15.339 "adrfam": "IPv4", 00:21:15.339 "traddr": "10.0.0.2", 00:21:15.339 "trsvcid": "4420" 00:21:15.339 }, 00:21:15.339 "peer_address": { 00:21:15.339 "trtype": "TCP", 00:21:15.339 "adrfam": "IPv4", 00:21:15.339 "traddr": "10.0.0.1", 00:21:15.339 "trsvcid": "60372" 00:21:15.339 }, 00:21:15.339 "auth": { 00:21:15.339 "state": "completed", 00:21:15.339 "digest": "sha512", 00:21:15.339 "dhgroup": "ffdhe8192" 00:21:15.339 } 00:21:15.339 } 00:21:15.339 ]' 00:21:15.339 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.339 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.339 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.339 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.339 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.597 18:23:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.597 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.597 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.856 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:21:15.856 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: --dhchap-ctrl-secret DHHC-1:02:ZDg0NmNiZTY2MWNmMzAwZTk4ZGFjZTJmZTMyZjI3OTBmZGNiY2RiZmE3Zjg4ODU1iMLrnQ==: 00:21:16.790 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.790 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.790 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.790 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.790 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.790 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.790 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:16.790 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.049 18:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.049 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.983 00:21:17.983 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.983 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.983 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.241 { 00:21:18.241 "cntlid": 141, 00:21:18.241 "qid": 0, 00:21:18.241 "state": "enabled", 00:21:18.241 "thread": "nvmf_tgt_poll_group_000", 00:21:18.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.241 "listen_address": { 00:21:18.241 "trtype": "TCP", 00:21:18.241 "adrfam": "IPv4", 00:21:18.241 "traddr": "10.0.0.2", 00:21:18.241 "trsvcid": "4420" 00:21:18.241 }, 00:21:18.241 "peer_address": { 00:21:18.241 "trtype": "TCP", 00:21:18.241 "adrfam": "IPv4", 00:21:18.241 "traddr": "10.0.0.1", 00:21:18.241 "trsvcid": "60406" 00:21:18.241 }, 00:21:18.241 "auth": { 00:21:18.241 "state": "completed", 00:21:18.241 "digest": "sha512", 00:21:18.241 "dhgroup": "ffdhe8192" 00:21:18.241 } 00:21:18.241 } 00:21:18.241 ]' 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.241 18:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.241 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.807 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:21:18.807 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:01:MDIxMTdjODhhYjBmODg2ZGZmOTlhMGJhNzhiZjA0YTnfmpt4: 00:21:19.743 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.743 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.743 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.743 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.743 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.743 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.743 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:19.743 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.001 18:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.001 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.935 00:21:20.935 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.935 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.935 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.193 { 00:21:21.193 "cntlid": 143, 00:21:21.193 "qid": 0, 00:21:21.193 "state": "enabled", 00:21:21.193 "thread": "nvmf_tgt_poll_group_000", 00:21:21.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.193 "listen_address": { 00:21:21.193 "trtype": "TCP", 00:21:21.193 "adrfam": "IPv4", 00:21:21.193 "traddr": "10.0.0.2", 00:21:21.193 "trsvcid": "4420" 00:21:21.193 }, 00:21:21.193 "peer_address": { 00:21:21.193 "trtype": "TCP", 00:21:21.193 "adrfam": "IPv4", 00:21:21.193 "traddr": "10.0.0.1", 00:21:21.193 "trsvcid": "60420" 00:21:21.193 }, 00:21:21.193 "auth": { 00:21:21.193 "state": "completed", 00:21:21.193 "digest": "sha512", 00:21:21.193 "dhgroup": "ffdhe8192" 00:21:21.193 } 00:21:21.193 } 00:21:21.193 ]' 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.193 
18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.193 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.452 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:21:21.452 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:21:22.386 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.387 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.387 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.387 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.387 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.387 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:22.387 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:22.387 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:22.387 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:22.387 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:22.387 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:22.644 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:22.644 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.644 18:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.644 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:22.644 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:22.645 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.645 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.645 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.645 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.645 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.645 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.645 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.645 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.586 00:21:23.586 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.586 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.586 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.844 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.844 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.844 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.844 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.844 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.844 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.844 { 00:21:23.844 "cntlid": 145, 00:21:23.844 "qid": 0, 00:21:23.844 "state": "enabled", 00:21:23.844 "thread": "nvmf_tgt_poll_group_000", 00:21:23.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:23.844 "listen_address": { 00:21:23.844 "trtype": "TCP", 00:21:23.844 "adrfam": "IPv4", 00:21:23.844 "traddr": "10.0.0.2", 00:21:23.844 "trsvcid": "4420" 00:21:23.844 }, 00:21:23.844 "peer_address": { 00:21:23.844 
"trtype": "TCP", 00:21:23.844 "adrfam": "IPv4", 00:21:23.844 "traddr": "10.0.0.1", 00:21:23.844 "trsvcid": "56678" 00:21:23.844 }, 00:21:23.844 "auth": { 00:21:23.844 "state": "completed", 00:21:23.844 "digest": "sha512", 00:21:23.844 "dhgroup": "ffdhe8192" 00:21:23.844 } 00:21:23.844 } 00:21:23.844 ]' 00:21:23.844 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.844 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.844 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.102 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:24.102 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.102 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.102 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.102 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.360 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:21:24.360 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDAyMGJiNmQ3MzBjYjE1NWJmOTVhZTFjMDMwYTE1YWQ0YzZhMzI5YjNhZDY2ZGM5J/ajjg==: --dhchap-ctrl-secret DHHC-1:03:ZTFlZjczYmMxYTEzODI2MjEwY2I4MmYzYzdiMTk0MTJiZDhiYzYyOTlmMTZmMjRhYTU3YzkwMGZiNDc5ZWEzMv02Kxs=: 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:25.292 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:26.224 request: 00:21:26.224 { 00:21:26.224 "name": "nvme0", 00:21:26.224 "trtype": "tcp", 00:21:26.224 "traddr": "10.0.0.2", 00:21:26.224 "adrfam": "ipv4", 00:21:26.224 "trsvcid": "4420", 00:21:26.224 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:26.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:26.224 "prchk_reftag": false, 00:21:26.224 "prchk_guard": false, 00:21:26.224 "hdgst": false, 00:21:26.224 "ddgst": false, 00:21:26.224 "dhchap_key": "key2", 00:21:26.224 "allow_unrecognized_csi": false, 00:21:26.224 "method": "bdev_nvme_attach_controller", 00:21:26.224 "req_id": 1 00:21:26.224 } 00:21:26.224 Got JSON-RPC error response 00:21:26.224 response: 00:21:26.224 { 00:21:26.224 "code": -5, 00:21:26.224 "message": "Input/output error" 00:21:26.224 } 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.224 18:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:26.224 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:26.225 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:26.225 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:26.225 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:26.225 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:26.225 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:26.225 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:26.225 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:26.225 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:27.160 request: 00:21:27.160 { 00:21:27.160 "name": "nvme0", 00:21:27.160 "trtype": "tcp", 00:21:27.160 "traddr": "10.0.0.2", 00:21:27.160 "adrfam": "ipv4", 00:21:27.160 "trsvcid": "4420", 00:21:27.160 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:27.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.160 "prchk_reftag": false, 00:21:27.160 "prchk_guard": false, 00:21:27.160 "hdgst": false, 00:21:27.160 "ddgst": false, 00:21:27.160 "dhchap_key": "key1", 00:21:27.160 "dhchap_ctrlr_key": "ckey2", 00:21:27.160 "allow_unrecognized_csi": false, 00:21:27.160 "method": "bdev_nvme_attach_controller", 00:21:27.160 "req_id": 1 00:21:27.160 } 00:21:27.160 Got JSON-RPC error response 00:21:27.160 response: 00:21:27.160 { 00:21:27.160 "code": -5, 00:21:27.160 "message": "Input/output error" 00:21:27.160 } 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:27.160 18:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.160 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.094 request: 00:21:28.094 { 00:21:28.094 "name": "nvme0", 00:21:28.094 "trtype": "tcp", 00:21:28.094 "traddr": "10.0.0.2", 00:21:28.094 "adrfam": "ipv4", 00:21:28.094 "trsvcid": "4420", 00:21:28.094 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:28.094 "prchk_reftag": false, 00:21:28.094 "prchk_guard": false, 00:21:28.094 "hdgst": false, 00:21:28.094 "ddgst": false, 00:21:28.094 "dhchap_key": "key1", 00:21:28.094 "dhchap_ctrlr_key": "ckey1", 00:21:28.094 "allow_unrecognized_csi": false, 00:21:28.094 "method": "bdev_nvme_attach_controller", 00:21:28.094 "req_id": 1 00:21:28.094 } 00:21:28.094 Got JSON-RPC error response 00:21:28.094 response: 00:21:28.094 { 00:21:28.094 "code": -5, 00:21:28.094 "message": "Input/output error" 00:21:28.094 } 00:21:28.094 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:28.094 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:28.094 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:28.094 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:28.094 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.094 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.095 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.095 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.095 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2470320 00:21:28.095 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2470320 ']' 00:21:28.095 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2470320 00:21:28.095 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:28.095 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.095 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470320 00:21:28.095 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:28.095 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:28.095 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470320' 00:21:28.095 killing process with pid 2470320 00:21:28.095 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2470320 00:21:28.095 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2470320 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2493858 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2493858 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2493858 ']' 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.471 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2493858 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2493858 ']' 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.405 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.663 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.663 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:30.663 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:30.663 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.663 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.922 null0 00:21:30.922 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.922 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:30.922 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xcL 00:21:30.922 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.922 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.922 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.922 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.SAU ]] 00:21:30.922 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SAU 00:21:30.922 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.922 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Vpu 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.bK4 ]] 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bK4 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:31.181 18:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.XhG 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.3kT ]] 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3kT 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.pTo 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.181 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:31.182 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:21:31.182 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.557 nvme0n1 00:21:32.557 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.557 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.557 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.816 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.816 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.816 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.816 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.816 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.816 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.816 { 00:21:32.816 "cntlid": 1, 00:21:32.816 "qid": 0, 00:21:32.816 "state": "enabled", 00:21:32.816 "thread": "nvmf_tgt_poll_group_000", 00:21:32.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.816 "listen_address": { 00:21:32.816 "trtype": "TCP", 00:21:32.816 "adrfam": "IPv4", 00:21:32.816 "traddr": "10.0.0.2", 00:21:32.816 "trsvcid": "4420" 00:21:32.816 }, 00:21:32.816 "peer_address": { 00:21:32.816 "trtype": "TCP", 00:21:32.816 "adrfam": "IPv4", 00:21:32.816 "traddr": "10.0.0.1", 00:21:32.816 "trsvcid": "56728" 00:21:32.816 }, 00:21:32.816 "auth": { 00:21:32.816 "state": "completed", 00:21:32.816 "digest": "sha512", 00:21:32.816 "dhgroup": "ffdhe8192" 00:21:32.816 } 00:21:32.816 } 00:21:32.816 ]' 00:21:32.816 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.816 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.816 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.816 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.816 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.074 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.074 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.074 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.331 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:21:33.331 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:21:34.265 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.265 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.265 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.265 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.265 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.265 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:34.265 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.265 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.265 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.265 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:34.265 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:34.523 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:34.523 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:34.523 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:34.523 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:34.523 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.523 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:34.523 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.523 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.523 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.523 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.781 request: 00:21:34.781 { 00:21:34.781 "name": "nvme0", 00:21:34.781 "trtype": "tcp", 00:21:34.781 "traddr": "10.0.0.2", 00:21:34.781 "adrfam": "ipv4", 00:21:34.781 "trsvcid": "4420", 00:21:34.782 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:34.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.782 "prchk_reftag": false, 00:21:34.782 "prchk_guard": false, 00:21:34.782 "hdgst": false, 00:21:34.782 "ddgst": false, 00:21:34.782 "dhchap_key": "key3", 00:21:34.782 "allow_unrecognized_csi": false, 00:21:34.782 "method": "bdev_nvme_attach_controller", 00:21:34.782 "req_id": 1 00:21:34.782 } 00:21:34.782 Got JSON-RPC error response 00:21:34.782 response: 00:21:34.782 { 00:21:34.782 "code": -5, 00:21:34.782 "message": "Input/output error" 00:21:34.782 } 00:21:34.782 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:34.782 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:34.782 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:34.782 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:34.782 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:34.782 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:34.782 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:34.782 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:35.040 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:35.040 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:35.040 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:35.040 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:35.040 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.040 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:35.040 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.040 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:35.040 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.040 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.298 request: 00:21:35.298 { 00:21:35.298 "name": "nvme0", 00:21:35.298 "trtype": "tcp", 00:21:35.298 "traddr": "10.0.0.2", 00:21:35.298 "adrfam": "ipv4", 00:21:35.298 "trsvcid": "4420", 00:21:35.298 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:35.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.298 "prchk_reftag": false, 00:21:35.298 "prchk_guard": false, 00:21:35.298 "hdgst": false, 00:21:35.298 "ddgst": false, 00:21:35.298 "dhchap_key": "key3", 00:21:35.298 "allow_unrecognized_csi": false, 00:21:35.298 "method": "bdev_nvme_attach_controller", 00:21:35.298 "req_id": 1 00:21:35.298 } 00:21:35.298 Got JSON-RPC error response 00:21:35.298 response: 00:21:35.298 { 00:21:35.298 "code": -5, 00:21:35.298 "message": "Input/output error" 00:21:35.298 } 00:21:35.298 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:35.298 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.298 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.298 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.298 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:35.298 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:35.298 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:35.298 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.298 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.298 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.556 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.556 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.557 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:36.123 request: 00:21:36.123 { 00:21:36.123 "name": "nvme0", 00:21:36.123 "trtype": "tcp", 00:21:36.123 "traddr": "10.0.0.2", 00:21:36.123 "adrfam": "ipv4", 00:21:36.123 "trsvcid": "4420", 00:21:36.123 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:36.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.123 "prchk_reftag": false, 00:21:36.123 "prchk_guard": false, 00:21:36.123 "hdgst": false, 00:21:36.123 "ddgst": false, 00:21:36.123 "dhchap_key": "key0", 00:21:36.123 "dhchap_ctrlr_key": "key1", 00:21:36.123 "allow_unrecognized_csi": false, 00:21:36.123 "method": "bdev_nvme_attach_controller", 00:21:36.123 "req_id": 1 00:21:36.123 } 00:21:36.123 Got JSON-RPC error response 00:21:36.123 response: 00:21:36.123 { 00:21:36.123 "code": -5, 00:21:36.123 "message": "Input/output error" 00:21:36.123 } 00:21:36.123 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:36.123 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.123 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.123 18:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.123 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:36.123 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:36.123 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:36.690 nvme0n1 00:21:36.690 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:36.690 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:36.690 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.947 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.947 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.947 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.206 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:37.206 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.206 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.206 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.206 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:37.206 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:37.206 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:38.580 nvme0n1 00:21:38.580 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:38.580 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:38.580 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.839 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.839 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:38.839 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.839 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.839 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.839 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:38.839 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:38.839 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.097 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.097 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:21:39.097 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: --dhchap-ctrl-secret DHHC-1:03:NWFjMzg3YzNmODk5OTExM2U2Y2ZmYTAyOGYyOTU5ODIwYzdjMDQ3OGIyZWIwN2U0ZjczNTRiZjliNGYwOTAxOHbsp0w=: 00:21:40.030 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:40.030 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:40.030 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:40.030 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:40.030 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:40.030 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:40.030 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:40.030 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.030 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.289 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:21:40.289 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:40.289 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:40.289 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:40.289 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.289 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:40.289 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.289 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:40.289 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:40.289 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:41.223 request: 00:21:41.223 { 00:21:41.223 "name": "nvme0", 00:21:41.223 "trtype": "tcp", 00:21:41.223 "traddr": "10.0.0.2", 00:21:41.223 "adrfam": "ipv4", 00:21:41.223 "trsvcid": "4420", 00:21:41.223 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:41.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.223 "prchk_reftag": false, 00:21:41.223 "prchk_guard": false, 00:21:41.223 "hdgst": false, 00:21:41.223 "ddgst": false, 00:21:41.223 "dhchap_key": "key1", 00:21:41.223 "allow_unrecognized_csi": false, 00:21:41.223 "method": "bdev_nvme_attach_controller", 00:21:41.223 "req_id": 1 00:21:41.223 } 00:21:41.223 Got JSON-RPC error response 00:21:41.223 response: 00:21:41.223 { 00:21:41.223 "code": -5, 00:21:41.223 "message": "Input/output error" 00:21:41.223 } 00:21:41.223 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:41.223 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:41.223 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:41.223 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:41.223 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:41.223 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:41.223 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:42.598 nvme0n1 00:21:42.856 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:42.856 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:42.856 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.113 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.113 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.113 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.377 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.377 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.377 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.378 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.378 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:43.378 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:43.378 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:43.646 nvme0n1 00:21:43.646 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:43.646 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:43.646 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.902 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.902 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.902 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.159 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:44.159 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.159 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: '' 2s 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: ]] 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjczNjM5ZmIxZWM5NjViZjg3MzgzZmU2MWI2NDIxNWVStEFb: 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:44.416 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: 2s 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: ]] 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2Q3MGM0NmE1MDA3MzdiODI2MWI3NDg3NjIxNjkyNGE3ZWFlMmE1ZTRkY2YyMzE1HK8+1Q==: 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:46.314 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:48.215 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:48.215 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:48.215 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:48.215 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:48.215 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:48.215 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:48.473 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:48.473 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.473 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:48.473 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.473 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.473 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.473 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:48.473 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:48.473 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:49.845 nvme0n1 00:21:49.845 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:49.845 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.845 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.845 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.845 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:49.845 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:50.778 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:50.778 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:50.778 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.037 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.037 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.037 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.037 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.037 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.037 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:51.037 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:51.296 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:51.296 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.296 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:21:51.555 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.555 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:51.555 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.555 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.813 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.813 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:51.813 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:51.813 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:51.813 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:51.813 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.813 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:51.813 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:51.813 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:51.813 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:52.748 request: 00:21:52.748 { 00:21:52.748 "name": "nvme0", 00:21:52.748 "dhchap_key": "key1", 00:21:52.748 "dhchap_ctrlr_key": "key3", 00:21:52.748 "method": "bdev_nvme_set_keys", 00:21:52.748 "req_id": 1 00:21:52.748 } 00:21:52.748 Got JSON-RPC error response 00:21:52.748 response: 00:21:52.748 { 00:21:52.748 "code": -13, 00:21:52.748 "message": "Permission denied" 00:21:52.748 } 00:21:52.748 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:52.748 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:52.748 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:52.748 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:52.748 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:52.748 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.748 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:53.006 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:53.006 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:53.940 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:53.940 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:53.940 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.199 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:21:54.199 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:55.135 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:55.135 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:55.135 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.394 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:55.395 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:55.395 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.395 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.395 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.395 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:55.395 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:55.395 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:57.299 nvme0n1 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:57.299 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:57.866 request: 00:21:57.866 { 00:21:57.866 "name": "nvme0", 00:21:57.866 "dhchap_key": "key2", 00:21:57.866 "dhchap_ctrlr_key": "key0", 00:21:57.866 "method": "bdev_nvme_set_keys", 00:21:57.866 "req_id": 1 00:21:57.866 } 00:21:57.866 Got JSON-RPC error response 00:21:57.866 response: 00:21:57.866 { 00:21:57.866 "code": -13, 00:21:57.866 "message": "Permission denied" 00:21:57.866 } 00:21:57.866 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:57.866 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.867 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.867 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.867 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:57.867 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:57.867 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.520 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:58.520 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:59.462 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:59.462 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:59.462 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.462 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:59.462 18:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2470473 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2470473 ']' 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2470473 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470473 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470473' 00:22:00.836 killing process with pid 2470473 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2470473 00:22:00.836 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2470473 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:03.363 rmmod nvme_tcp 00:22:03.363 rmmod nvme_fabrics 00:22:03.363 rmmod nvme_keyring 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2493858 ']' 00:22:03.363 18:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2493858 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2493858 ']' 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2493858 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2493858 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2493858' 00:22:03.363 killing process with pid 2493858 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2493858 00:22:03.363 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2493858 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.297 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.832 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:06.832 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.xcL /tmp/spdk.key-sha256.Vpu /tmp/spdk.key-sha384.XhG /tmp/spdk.key-sha512.pTo /tmp/spdk.key-sha512.SAU /tmp/spdk.key-sha384.bK4 /tmp/spdk.key-sha256.3kT '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:06.832 00:22:06.832 real 3m48.780s 00:22:06.832 user 8m50.843s 00:22:06.832 sys 0m26.993s 00:22:06.832 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 
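The re-key sequence exercised in the tail of this test boils down to the following RPC pair; a minimal sketch assuming the same subsystem, host NQN, and key names that appear in the log, not an authoritative excerpt:

# Target side: rotate the keys recorded for the host on the subsystem.
scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# Host side: update the attached controller to the same key pair via the host RPC socket.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# A key pair the subsystem does not expect is rejected with -13 (Permission denied), as seen above.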
00:22:06.832 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.832 ************************************ 00:22:06.832 END TEST nvmf_auth_target 00:22:06.832 ************************************ 00:22:06.832 18:24:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:06.832 18:24:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:06.832 18:24:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:06.832 18:24:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.832 18:24:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:06.832 ************************************ 00:22:06.832 START TEST nvmf_bdevio_no_huge 00:22:06.832 ************************************ 00:22:06.832 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:06.832 * Looking for test storage... 00:22:06.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:06.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.833 --rc genhtml_branch_coverage=1 00:22:06.833 --rc genhtml_function_coverage=1 00:22:06.833 --rc genhtml_legend=1 00:22:06.833 --rc geninfo_all_blocks=1 00:22:06.833 --rc geninfo_unexecuted_blocks=1 00:22:06.833 00:22:06.833 ' 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:06.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.833 --rc genhtml_branch_coverage=1 00:22:06.833 --rc genhtml_function_coverage=1 00:22:06.833 --rc genhtml_legend=1 00:22:06.833 --rc geninfo_all_blocks=1 00:22:06.833 --rc geninfo_unexecuted_blocks=1 00:22:06.833 00:22:06.833 ' 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:06.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.833 --rc genhtml_branch_coverage=1 00:22:06.833 --rc genhtml_function_coverage=1 00:22:06.833 --rc genhtml_legend=1 00:22:06.833 --rc geninfo_all_blocks=1 00:22:06.833 --rc geninfo_unexecuted_blocks=1 00:22:06.833 00:22:06.833 ' 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:06.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.833 --rc genhtml_branch_coverage=1 00:22:06.833 --rc genhtml_function_coverage=1 00:22:06.833 --rc genhtml_legend=1 00:22:06.833 --rc geninfo_all_blocks=1 00:22:06.833 --rc geninfo_unexecuted_blocks=1 00:22:06.833 00:22:06.833 ' 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:06.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.833 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:06.834 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.738 
18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:08.738 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:08.738 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:08.738 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.738 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:08.739 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:08.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:22:08.739 00:22:08.739 --- 10.0.0.2 ping statistics --- 00:22:08.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.739 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:22:08.739 00:22:08.739 --- 10.0.0.1 ping statistics --- 00:22:08.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.739 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2500651 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2500651 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2500651 ']' 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.739 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:08.739 [2024-11-28 18:24:20.834620] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:22:08.739 [2024-11-28 18:24:20.834798] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:08.997 [2024-11-28 18:24:21.031861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.255 [2024-11-28 18:24:21.187760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.255 [2024-11-28 18:24:21.187849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.255 [2024-11-28 18:24:21.187879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.255 [2024-11-28 18:24:21.187908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.255 [2024-11-28 18:24:21.187928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
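[editor's sketch] The target setup that the bdevio trace below performs through rpc_cmd corresponds, roughly, to the rpc.py calls listed here, issued against the nvmf_tgt just launched inside the cvl_0_0_ns_spdk namespace. All flags and values are taken from the trace; the sketch assumes the default /var/tmp/spdk.sock RPC socket and shortens the workspace paths.

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                  # TCP transport, 8192-byte in-capsule data
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                     # 64 MiB, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary is then pointed at that listener with the JSON shown further below (bdev_nvme_attach_controller to 10.0.0.2:4420), again with --no-huge -s 1024.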
00:22:09.255 [2024-11-28 18:24:21.190131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:09.255 [2024-11-28 18:24:21.190201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:09.255 [2024-11-28 18:24:21.190256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.255 [2024-11-28 18:24:21.190262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.822 [2024-11-28 18:24:21.792460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.822 Malloc0 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:09.822 [2024-11-28 18:24:21.882238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.822 { 00:22:09.822 "params": { 00:22:09.822 "name": "Nvme$subsystem", 00:22:09.822 "trtype": "$TEST_TRANSPORT", 00:22:09.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.822 "adrfam": "ipv4", 00:22:09.822 "trsvcid": "$NVMF_PORT", 00:22:09.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.822 "hdgst": ${hdgst:-false}, 00:22:09.822 "ddgst": ${ddgst:-false} 00:22:09.822 }, 00:22:09.822 "method": "bdev_nvme_attach_controller" 00:22:09.822 } 00:22:09.822 EOF 00:22:09.822 )") 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:09.822 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:09.822 "params": { 00:22:09.822 "name": "Nvme1", 00:22:09.822 "trtype": "tcp", 00:22:09.822 "traddr": "10.0.0.2", 00:22:09.822 "adrfam": "ipv4", 00:22:09.822 "trsvcid": "4420", 00:22:09.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.822 "hdgst": false, 00:22:09.822 "ddgst": false 00:22:09.822 }, 00:22:09.822 "method": "bdev_nvme_attach_controller" 00:22:09.822 }' 00:22:10.080 [2024-11-28 18:24:21.966174] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:22:10.080 [2024-11-28 18:24:21.966307] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2500814 ] 00:22:10.080 [2024-11-28 18:24:22.120251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:10.338 [2024-11-28 18:24:22.263673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.338 [2024-11-28 18:24:22.263719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.338 [2024-11-28 18:24:22.263730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.905 I/O targets: 00:22:10.905 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:10.905 00:22:10.905 00:22:10.905 CUnit - A unit testing framework for C - Version 2.1-3 00:22:10.905 http://cunit.sourceforge.net/ 00:22:10.905 00:22:10.905 00:22:10.905 Suite: bdevio tests on: Nvme1n1 00:22:10.905 Test: blockdev write read block ...passed 00:22:10.905 Test: blockdev write zeroes read block ...passed 00:22:10.905 Test: blockdev write zeroes read no split ...passed 00:22:11.163 Test: blockdev write zeroes read split ...passed 00:22:11.163 Test: blockdev write zeroes read split partial ...passed 00:22:11.163 Test: blockdev reset ...[2024-11-28 18:24:23.088006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:11.163 [2024-11-28 18:24:23.088197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:22:11.163 [2024-11-28 18:24:23.197011] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:11.163 passed 00:22:11.163 Test: blockdev write read 8 blocks ...passed 00:22:11.163 Test: blockdev write read size > 128k ...passed 00:22:11.163 Test: blockdev write read invalid size ...passed 00:22:11.163 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:11.163 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:11.163 Test: blockdev write read max offset ...passed 00:22:11.422 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:11.422 Test: blockdev writev readv 8 blocks ...passed 00:22:11.422 Test: blockdev writev readv 30 x 1block ...passed 00:22:11.422 Test: blockdev writev readv block ...passed 00:22:11.422 Test: blockdev writev readv size > 128k ...passed 00:22:11.422 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:11.422 Test: blockdev comparev and writev ...[2024-11-28 18:24:23.373956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.422 [2024-11-28 18:24:23.374031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.422 [2024-11-28 18:24:23.374070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.422 [2024-11-28 18:24:23.374105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.422 [2024-11-28 18:24:23.374539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.422 [2024-11-28 18:24:23.374572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:11.422 [2024-11-28 18:24:23.374605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.422 [2024-11-28 18:24:23.374630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:11.422 [2024-11-28 18:24:23.375065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.422 [2024-11-28 18:24:23.375105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:11.422 [2024-11-28 18:24:23.375141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.422 [2024-11-28 18:24:23.375165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:11.422 [2024-11-28 18:24:23.375625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.422 [2024-11-28 18:24:23.375657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:11.422 [2024-11-28 18:24:23.375689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:11.422 [2024-11-28 18:24:23.375714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:11.422 passed 00:22:11.422 Test: blockdev nvme passthru rw ...passed 00:22:11.422 Test: blockdev nvme passthru vendor specific ...[2024-11-28 18:24:23.459500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.422 [2024-11-28 18:24:23.459555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:11.422 [2024-11-28 18:24:23.459781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.422 [2024-11-28 18:24:23.459814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:11.422 [2024-11-28 18:24:23.460013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.422 [2024-11-28 18:24:23.460045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:11.422 [2024-11-28 18:24:23.460255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.422 [2024-11-28 18:24:23.460287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:11.422 passed 00:22:11.422 Test: blockdev nvme admin passthru ...passed 00:22:11.681 Test: blockdev copy ...passed 00:22:11.681 00:22:11.681 Run Summary: Type Total Ran Passed Failed Inactive 00:22:11.681 suites 1 1 n/a 0 0 00:22:11.681 tests 23 23 23 0 0 00:22:11.681 asserts 152 152 152 0 n/a 00:22:11.681 00:22:11.681 Elapsed time = 1.264 seconds 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.248 rmmod nvme_tcp 00:22:12.248 rmmod nvme_fabrics 00:22:12.248 rmmod nvme_keyring 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2500651 ']' 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2500651 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2500651 ']' 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2500651 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2500651 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2500651' 00:22:12.248 killing process with pid 2500651 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2500651 00:22:12.248 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2500651 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.184 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.086 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:15.086 00:22:15.086 real 0m8.750s 00:22:15.086 user 0m20.469s 00:22:15.086 sys 0m2.888s 00:22:15.086 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.087 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:15.087 ************************************ 00:22:15.087 END TEST nvmf_bdevio_no_huge 00:22:15.087 ************************************ 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:15.346 ************************************ 00:22:15.346 START TEST nvmf_tls 00:22:15.346 ************************************ 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:15.346 * Looking for test storage... 00:22:15.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:15.346 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:15.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.347 --rc genhtml_branch_coverage=1 00:22:15.347 --rc genhtml_function_coverage=1 00:22:15.347 --rc genhtml_legend=1 00:22:15.347 --rc geninfo_all_blocks=1 00:22:15.347 --rc geninfo_unexecuted_blocks=1 00:22:15.347 00:22:15.347 ' 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:15.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.347 --rc genhtml_branch_coverage=1 00:22:15.347 --rc genhtml_function_coverage=1 00:22:15.347 --rc genhtml_legend=1 00:22:15.347 --rc geninfo_all_blocks=1 00:22:15.347 --rc geninfo_unexecuted_blocks=1 00:22:15.347 00:22:15.347 ' 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:15.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.347 --rc genhtml_branch_coverage=1 00:22:15.347 --rc genhtml_function_coverage=1 00:22:15.347 --rc genhtml_legend=1 00:22:15.347 --rc geninfo_all_blocks=1 00:22:15.347 --rc geninfo_unexecuted_blocks=1 00:22:15.347 00:22:15.347 ' 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:15.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.347 --rc genhtml_branch_coverage=1 00:22:15.347 --rc genhtml_function_coverage=1 00:22:15.347 --rc genhtml_legend=1 00:22:15.347 --rc geninfo_all_blocks=1 00:22:15.347 --rc geninfo_unexecuted_blocks=1 00:22:15.347 00:22:15.347 ' 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:15.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:15.347 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.250 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:17.251 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:17.251 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:17.251 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:17.251 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.251 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:17.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:22:17.510 00:22:17.510 --- 10.0.0.2 ping statistics --- 00:22:17.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.510 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:17.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:22:17.510 00:22:17.510 --- 10.0.0.1 ping statistics --- 00:22:17.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.510 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2503142 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2503142 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2503142 ']' 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.510 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.510 [2024-11-28 18:24:29.547649] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:22:17.510 [2024-11-28 18:24:29.547797] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.769 [2024-11-28 18:24:29.697873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.769 [2024-11-28 18:24:29.833089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.769 [2024-11-28 18:24:29.833195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.769 [2024-11-28 18:24:29.833224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.769 [2024-11-28 18:24:29.833249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.769 [2024-11-28 18:24:29.833271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.769 [2024-11-28 18:24:29.834971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.704 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.704 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:18.704 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.704 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.704 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.704 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.704 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:18.704 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:18.704 true 00:22:18.963 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:18.963 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:19.220 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:19.220 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:19.220 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:19.477 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:19.477 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:19.735 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:19.735 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:19.735 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:19.993 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:19.993 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:20.252 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:20.252 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:20.252 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:20.252 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:20.510 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:20.510 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:20.510 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:20.769 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:20.769 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:21.028 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:21.028 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:21.028 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:21.287 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:21.287 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:21.546 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:21.546 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:21.546 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:21.546 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:21.546 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:21.546 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:21.546 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:21.546 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:21.546 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.dRYHt1peap 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.57yj6NqKkl 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dRYHt1peap 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.57yj6NqKkl 00:22:21.804 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:22.062 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:22.628 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.dRYHt1peap 00:22:22.628 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dRYHt1peap 00:22:22.628 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:22.887 [2024-11-28 18:24:34.910271] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.887 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:23.145 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:23.402 [2024-11-28 18:24:35.467828] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.402 [2024-11-28 18:24:35.468201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.402 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:23.968 malloc0 00:22:23.968 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:24.226 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dRYHt1peap 00:22:24.485 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:24.743 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.dRYHt1peap 00:22:34.795 Initializing NVMe Controllers 00:22:34.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:34.795 Initialization complete. Launching workers. 00:22:34.795 ======================================================== 00:22:34.795 Latency(us) 00:22:34.795 Device Information : IOPS MiB/s Average min max 00:22:34.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5409.56 21.13 11836.15 2273.13 13507.99 00:22:34.795 ======================================================== 00:22:34.795 Total : 5409.56 21.13 11836.15 2273.13 13507.99 00:22:34.795 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dRYHt1peap 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dRYHt1peap 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2505171 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2505171 /var/tmp/bdevperf.sock 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2505171 ']' 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:34.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.795 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.053 [2024-11-28 18:24:46.962181] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:22:35.053 [2024-11-28 18:24:46.962324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505171 ] 00:22:35.053 [2024-11-28 18:24:47.097480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.311 [2024-11-28 18:24:47.221868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.903 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.903 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:35.903 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dRYHt1peap 00:22:36.161 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:36.419 [2024-11-28 18:24:48.456412] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.677 TLSTESTn1 00:22:36.677 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:36.677 Running I/O for 10 seconds... 
00:22:38.990 2622.00 IOPS, 10.24 MiB/s [2024-11-28T17:24:52.022Z] 2667.00 IOPS, 10.42 MiB/s [2024-11-28T17:24:52.965Z] 2689.67 IOPS, 10.51 MiB/s [2024-11-28T17:24:53.899Z] 2697.75 IOPS, 10.54 MiB/s [2024-11-28T17:24:54.834Z] 2705.00 IOPS, 10.57 MiB/s [2024-11-28T17:24:55.766Z] 2706.33 IOPS, 10.57 MiB/s [2024-11-28T17:24:56.701Z] 2708.57 IOPS, 10.58 MiB/s [2024-11-28T17:24:58.076Z] 2708.62 IOPS, 10.58 MiB/s [2024-11-28T17:24:59.010Z] 2712.22 IOPS, 10.59 MiB/s [2024-11-28T17:24:59.010Z] 2710.60 IOPS, 10.59 MiB/s 00:22:46.910 Latency(us) 00:22:46.910 [2024-11-28T17:24:59.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.910 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:46.910 Verification LBA range: start 0x0 length 0x2000 00:22:46.910 TLSTESTn1 : 10.03 2716.22 10.61 0.00 0.00 47036.38 8835.22 40389.59 00:22:46.910 [2024-11-28T17:24:59.010Z] =================================================================================================================== 00:22:46.910 [2024-11-28T17:24:59.010Z] Total : 2716.22 10.61 0.00 0.00 47036.38 8835.22 40389.59 00:22:46.910 { 00:22:46.910 "results": [ 00:22:46.910 { 00:22:46.910 "job": "TLSTESTn1", 00:22:46.910 "core_mask": "0x4", 00:22:46.910 "workload": "verify", 00:22:46.910 "status": "finished", 00:22:46.910 "verify_range": { 00:22:46.910 "start": 0, 00:22:46.910 "length": 8192 00:22:46.910 }, 00:22:46.910 "queue_depth": 128, 00:22:46.910 "io_size": 4096, 00:22:46.910 "runtime": 10.025715, 00:22:46.910 "iops": 2716.2152524782523, 00:22:46.910 "mibps": 10.610215829993173, 00:22:46.910 "io_failed": 0, 00:22:46.910 "io_timeout": 0, 00:22:46.910 "avg_latency_us": 47036.37865343604, 00:22:46.910 "min_latency_us": 8835.223703703703, 00:22:46.910 "max_latency_us": 40389.59407407408 00:22:46.910 } 00:22:46.910 ], 00:22:46.910 "core_count": 1 00:22:46.910 } 00:22:46.910 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:46.910 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2505171 00:22:46.910 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2505171 ']' 00:22:46.910 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2505171 00:22:46.911 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:46.911 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.911 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2505171 00:22:46.911 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:46.911 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:46.911 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2505171' 00:22:46.911 killing process with pid 2505171 00:22:46.911 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2505171 00:22:46.911 Received shutdown signal, test time was about 10.000000 seconds 00:22:46.911 00:22:46.911 Latency(us) 00:22:46.911 [2024-11-28T17:24:59.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.911 [2024-11-28T17:24:59.011Z] 
=================================================================================================================== 00:22:46.911 [2024-11-28T17:24:59.011Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:46.911 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2505171 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.57yj6NqKkl 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.57yj6NqKkl 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.57yj6NqKkl 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.57yj6NqKkl 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2506627 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2506627 /var/tmp/bdevperf.sock 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2506627 ']' 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.847 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.847 [2024-11-28 18:24:59.684921] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:22:47.847 [2024-11-28 18:24:59.685061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506627 ] 00:22:47.847 [2024-11-28 18:24:59.824287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.106 [2024-11-28 18:24:59.948809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.672 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.673 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:48.673 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.57yj6NqKkl 00:22:48.930 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:49.188 [2024-11-28 18:25:01.260020] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.188 [2024-11-28 18:25:01.274399] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:49.188 [2024-11-28 18:25:01.274938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:22:49.188 [2024-11-28 18:25:01.275901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:49.188 [2024-11-28 18:25:01.276892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:49.188 [2024-11-28 18:25:01.276925] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:49.188 [2024-11-28 18:25:01.276965] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:49.188 [2024-11-28 18:25:01.277000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:22:49.188 request: 00:22:49.188 { 00:22:49.188 "name": "TLSTEST", 00:22:49.188 "trtype": "tcp", 00:22:49.188 "traddr": "10.0.0.2", 00:22:49.188 "adrfam": "ipv4", 00:22:49.188 "trsvcid": "4420", 00:22:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.188 "prchk_reftag": false, 00:22:49.188 "prchk_guard": false, 00:22:49.188 "hdgst": false, 00:22:49.188 "ddgst": false, 00:22:49.188 "psk": "key0", 00:22:49.188 "allow_unrecognized_csi": false, 00:22:49.188 "method": "bdev_nvme_attach_controller", 00:22:49.188 "req_id": 1 00:22:49.188 } 00:22:49.188 Got JSON-RPC error response 00:22:49.188 response: 00:22:49.188 { 00:22:49.188 "code": -5, 00:22:49.188 "message": "Input/output error" 00:22:49.188 } 00:22:49.446 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2506627 00:22:49.446 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2506627 ']' 00:22:49.446 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2506627 00:22:49.446 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:49.446 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.446 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2506627 00:22:49.446 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:49.446 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:49.446 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2506627' 00:22:49.446 killing process with pid 2506627 00:22:49.446 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2506627 00:22:49.446 Received shutdown signal, test time was about 10.000000 seconds 00:22:49.446 00:22:49.446 Latency(us) 00:22:49.446 [2024-11-28T17:25:01.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.446 [2024-11-28T17:25:01.546Z] =================================================================================================================== 00:22:49.446 [2024-11-28T17:25:01.546Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.446 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2506627 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dRYHt1peap 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.dRYHt1peap 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dRYHt1peap 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dRYHt1peap 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2506909 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2506909 /var/tmp/bdevperf.sock 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2506909 ']' 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.381 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.381 [2024-11-28 18:25:02.271754] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:22:50.381 [2024-11-28 18:25:02.271903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506909 ] 00:22:50.381 [2024-11-28 18:25:02.405950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.639 [2024-11-28 18:25:02.527218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.206 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.206 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:51.206 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dRYHt1peap 00:22:51.464 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:51.723 [2024-11-28 18:25:03.804028] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.723 [2024-11-28 18:25:03.813934] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:51.723 [2024-11-28 18:25:03.813972] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:51.723 [2024-11-28 18:25:03.814058] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:51.723 [2024-11-28 18:25:03.815041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:22:51.723 [2024-11-28 18:25:03.816019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:51.723 [2024-11-28 18:25:03.817013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:51.723 [2024-11-28 18:25:03.817060] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:51.723 [2024-11-28 18:25:03.817100] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:51.723 [2024-11-28 18:25:03.817156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:22:51.723 request: 00:22:51.723 { 00:22:51.723 "name": "TLSTEST", 00:22:51.723 "trtype": "tcp", 00:22:51.723 "traddr": "10.0.0.2", 00:22:51.723 "adrfam": "ipv4", 00:22:51.723 "trsvcid": "4420", 00:22:51.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.723 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.723 "prchk_reftag": false, 00:22:51.723 "prchk_guard": false, 00:22:51.723 "hdgst": false, 00:22:51.723 "ddgst": false, 00:22:51.723 "psk": "key0", 00:22:51.723 "allow_unrecognized_csi": false, 00:22:51.723 "method": "bdev_nvme_attach_controller", 00:22:51.723 "req_id": 1 00:22:51.723 } 00:22:51.723 Got JSON-RPC error response 00:22:51.723 response: 00:22:51.723 { 00:22:51.723 "code": -5, 00:22:51.723 "message": "Input/output error" 00:22:51.723 } 00:22:51.981 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2506909 00:22:51.981 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2506909 ']' 00:22:51.981 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2506909 00:22:51.981 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:51.981 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.981 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2506909 00:22:51.981 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:51.981 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:51.981 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2506909' 00:22:51.981 killing process with pid 2506909 00:22:51.981 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2506909 00:22:51.981 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.981 00:22:51.981 Latency(us) 00:22:51.981 [2024-11-28T17:25:04.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.981 [2024-11-28T17:25:04.081Z] =================================================================================================================== 00:22:51.981 [2024-11-28T17:25:04.081Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:51.981 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2506909 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dRYHt1peap 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.dRYHt1peap 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dRYHt1peap 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dRYHt1peap 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2507189 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2507189 /var/tmp/bdevperf.sock 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2507189 ']' 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.916 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.916 [2024-11-28 18:25:04.746313] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:22:52.916 [2024-11-28 18:25:04.746458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507189 ] 00:22:52.916 [2024-11-28 18:25:04.881516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.916 [2024-11-28 18:25:05.001557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.867 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.867 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.867 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dRYHt1peap 00:22:54.124 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:54.382 [2024-11-28 18:25:06.248932] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.382 [2024-11-28 18:25:06.259041] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:54.382 [2024-11-28 18:25:06.259104] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:54.382 [2024-11-28 18:25:06.259163] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:54.382 [2024-11-28 18:25:06.260149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:22:54.382 [2024-11-28 18:25:06.261126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:22:54.382 [2024-11-28 18:25:06.262120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:54.382 [2024-11-28 18:25:06.262155] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:54.383 [2024-11-28 18:25:06.262180] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:54.383 [2024-11-28 18:25:06.262211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:22:54.383 request: 00:22:54.383 { 00:22:54.383 "name": "TLSTEST", 00:22:54.383 "trtype": "tcp", 00:22:54.383 "traddr": "10.0.0.2", 00:22:54.383 "adrfam": "ipv4", 00:22:54.383 "trsvcid": "4420", 00:22:54.383 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:54.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.383 "prchk_reftag": false, 00:22:54.383 "prchk_guard": false, 00:22:54.383 "hdgst": false, 00:22:54.383 "ddgst": false, 00:22:54.383 "psk": "key0", 00:22:54.383 "allow_unrecognized_csi": false, 00:22:54.383 "method": "bdev_nvme_attach_controller", 00:22:54.383 "req_id": 1 00:22:54.383 } 00:22:54.383 Got JSON-RPC error response 00:22:54.383 response: 00:22:54.383 { 00:22:54.383 "code": -5, 00:22:54.383 "message": "Input/output error" 00:22:54.383 } 00:22:54.383 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2507189 00:22:54.383 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2507189 ']' 00:22:54.383 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2507189 00:22:54.383 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.383 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.383 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2507189 00:22:54.383 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:54.383 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:54.383 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2507189' 00:22:54.383 killing process with pid 2507189 00:22:54.383 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2507189 00:22:54.383 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.383 00:22:54.383 Latency(us) 00:22:54.383 [2024-11-28T17:25:06.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.383 [2024-11-28T17:25:06.483Z] =================================================================================================================== 00:22:54.383 [2024-11-28T17:25:06.483Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.383 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2507189 00:22:55.313 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:55.313 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:55.313 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.313 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.313 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.313 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:55.313 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:55.313 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:55.313 
18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:55.313 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2507502 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2507502 /var/tmp/bdevperf.sock 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2507502 ']' 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.314 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.314 [2024-11-28 18:25:07.224544] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:22:55.314 [2024-11-28 18:25:07.224712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507502 ] 00:22:55.314 [2024-11-28 18:25:07.379475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.571 [2024-11-28 18:25:07.506800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.505 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.505 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:56.505 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:56.505 [2024-11-28 18:25:08.549754] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:56.505 [2024-11-28 18:25:08.549808] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:56.505 request: 00:22:56.505 { 00:22:56.505 "name": "key0", 00:22:56.505 "path": "", 00:22:56.505 "method": "keyring_file_add_key", 00:22:56.505 "req_id": 1 00:22:56.505 } 00:22:56.505 Got JSON-RPC error response 00:22:56.505 response: 00:22:56.505 { 00:22:56.505 "code": -1, 00:22:56.505 "message": "Operation not permitted" 00:22:56.505 } 00:22:56.505 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:57.072 [2024-11-28 18:25:08.870790] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.072 [2024-11-28 18:25:08.870870] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:57.072 request: 00:22:57.072 { 00:22:57.072 "name": "TLSTEST", 00:22:57.072 "trtype": "tcp", 00:22:57.072 "traddr": "10.0.0.2", 00:22:57.072 "adrfam": "ipv4", 00:22:57.072 "trsvcid": "4420", 00:22:57.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.072 "prchk_reftag": false, 00:22:57.072 "prchk_guard": false, 00:22:57.072 "hdgst": false, 00:22:57.072 "ddgst": false, 00:22:57.072 "psk": "key0", 00:22:57.072 "allow_unrecognized_csi": false, 00:22:57.072 "method": "bdev_nvme_attach_controller", 00:22:57.072 "req_id": 1 00:22:57.072 } 00:22:57.072 Got JSON-RPC error response 00:22:57.072 response: 00:22:57.072 { 00:22:57.072 "code": -126, 00:22:57.072 "message": "Required key not available" 00:22:57.072 } 00:22:57.072 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2507502 00:22:57.072 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2507502 ']' 00:22:57.072 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2507502 00:22:57.072 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:57.072 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.072 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2507502 00:22:57.072 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:57.072 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:57.072 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2507502' 00:22:57.072 killing process with pid 2507502 00:22:57.072 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2507502 00:22:57.072 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.072 00:22:57.072 Latency(us) 00:22:57.072 [2024-11-28T17:25:09.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.072 [2024-11-28T17:25:09.172Z] =================================================================================================================== 00:22:57.072 [2024-11-28T17:25:09.172Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.072 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2507502 00:22:57.639 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:57.639 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:57.639 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.639 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.639 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.639 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2503142 00:22:57.639 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2503142 ']' 00:22:57.639 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2503142 00:22:57.639 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:57.639 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.639 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2503142 00:22:57.897 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:57.897 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.897 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2503142' 00:22:57.897 killing process with pid 2503142 00:22:57.897 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2503142 00:22:57.897 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2503142 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:59.272 18:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.BR75MuXkWt 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.BR75MuXkWt 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2508000 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2508000 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2508000 ']' 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.272 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.272 [2024-11-28 18:25:11.197154] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:22:59.272 [2024-11-28 18:25:11.197297] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.272 [2024-11-28 18:25:11.338529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.531 [2024-11-28 18:25:11.463233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.531 [2024-11-28 18:25:11.463320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:59.531 [2024-11-28 18:25:11.463347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.531 [2024-11-28 18:25:11.463373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.531 [2024-11-28 18:25:11.463393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.531 [2024-11-28 18:25:11.465020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.465 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.465 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:00.465 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.465 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.465 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.465 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.465 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.BR75MuXkWt 00:23:00.465 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BR75MuXkWt 00:23:00.465 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:00.465 [2024-11-28 18:25:12.525340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.465 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:00.723 18:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:01.289 [2024-11-28 18:25:13.115047] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:01.289 [2024-11-28 18:25:13.115415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.289 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:01.548 malloc0 00:23:01.548 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:01.806 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BR75MuXkWt 00:23:02.064 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BR75MuXkWt 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BR75MuXkWt 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2508419 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2508419 /var/tmp/bdevperf.sock 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2508419 ']' 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.323 18:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.323 [2024-11-28 18:25:14.320707] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:23:02.323 [2024-11-28 18:25:14.320851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2508419 ] 00:23:02.581 [2024-11-28 18:25:14.455550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.582 [2024-11-28 18:25:14.578185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.514 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.514 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:03.514 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BR75MuXkWt 00:23:03.772 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:04.030 [2024-11-28 18:25:15.928478] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.030 TLSTESTn1 00:23:04.030 18:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:04.287 Running I/O for 10 seconds... 00:23:06.153 2477.00 IOPS, 9.68 MiB/s [2024-11-28T17:25:19.215Z] 2533.00 IOPS, 9.89 MiB/s [2024-11-28T17:25:20.168Z] 2540.67 IOPS, 9.92 MiB/s [2024-11-28T17:25:21.541Z] 2546.25 IOPS, 9.95 MiB/s [2024-11-28T17:25:22.473Z] 2549.40 IOPS, 9.96 MiB/s [2024-11-28T17:25:23.407Z] 2562.67 IOPS, 10.01 MiB/s [2024-11-28T17:25:24.340Z] 2574.29 IOPS, 10.06 MiB/s [2024-11-28T17:25:25.275Z] 2572.75 IOPS, 10.05 MiB/s [2024-11-28T17:25:26.208Z] 2573.78 IOPS, 10.05 MiB/s [2024-11-28T17:25:26.208Z] 2578.60 IOPS, 10.07 MiB/s 00:23:14.108 Latency(us) 00:23:14.108 [2024-11-28T17:25:26.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.108 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:14.108 Verification LBA range: start 0x0 length 0x2000 00:23:14.108 TLSTESTn1 : 10.03 2584.55 10.10 0.00 0.00 49433.32 10048.85 43690.67 00:23:14.108 [2024-11-28T17:25:26.208Z] =================================================================================================================== 00:23:14.108 [2024-11-28T17:25:26.208Z] Total : 2584.55 10.10 0.00 0.00 49433.32 10048.85 43690.67 00:23:14.108 { 00:23:14.108 "results": [ 00:23:14.108 { 00:23:14.108 "job": "TLSTESTn1", 00:23:14.108 "core_mask": "0x4", 00:23:14.108 "workload": "verify", 00:23:14.108 "status": "finished", 00:23:14.108 "verify_range": { 00:23:14.108 "start": 0, 00:23:14.108 "length": 8192 00:23:14.108 }, 00:23:14.108 "queue_depth": 128, 00:23:14.108 "io_size": 4096, 00:23:14.108 "runtime": 10.025721, 00:23:14.108 "iops": 2584.552273098364, 00:23:14.108 "mibps": 10.095907316790484, 00:23:14.108 "io_failed": 0, 00:23:14.108 "io_timeout": 0, 00:23:14.108 "avg_latency_us": 49433.31646301442, 00:23:14.108 "min_latency_us": 10048.853333333333, 00:23:14.108 "max_latency_us": 43690.666666666664 00:23:14.108 } 00:23:14.108 ], 00:23:14.108 
"core_count": 1 00:23:14.108 } 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2508419 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2508419 ']' 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2508419 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2508419 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2508419' 00:23:14.366 killing process with pid 2508419 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2508419 00:23:14.366 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.366 00:23:14.366 Latency(us) 00:23:14.366 [2024-11-28T17:25:26.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.366 [2024-11-28T17:25:26.466Z] =================================================================================================================== 00:23:14.366 [2024-11-28T17:25:26.466Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.366 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2508419 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.BR75MuXkWt 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BR75MuXkWt 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BR75MuXkWt 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BR75MuXkWt 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BR75MuXkWt 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2509875 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2509875 /var/tmp/bdevperf.sock 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2509875 ']' 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.300 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.300 [2024-11-28 18:25:27.204112] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:23:15.300 [2024-11-28 18:25:27.204258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2509875 ] 00:23:15.300 [2024-11-28 18:25:27.339048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.558 [2024-11-28 18:25:27.463996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.121 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.121 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:16.121 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BR75MuXkWt 00:23:16.685 [2024-11-28 18:25:28.514972] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BR75MuXkWt': 0100666 00:23:16.685 [2024-11-28 18:25:28.515027] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:16.685 request: 00:23:16.685 { 00:23:16.685 "name": "key0", 00:23:16.685 "path": "/tmp/tmp.BR75MuXkWt", 00:23:16.685 "method": "keyring_file_add_key", 00:23:16.685 "req_id": 1 00:23:16.685 } 00:23:16.685 Got JSON-RPC error response 00:23:16.685 response: 00:23:16.685 { 00:23:16.685 "code": -1, 00:23:16.685 "message": "Operation not permitted" 00:23:16.685 } 00:23:16.685 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.685 [2024-11-28 18:25:28.775840] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.685 [2024-11-28 18:25:28.775919] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:16.685 request: 00:23:16.685 { 00:23:16.685 "name": "TLSTEST", 00:23:16.685 "trtype": "tcp", 00:23:16.685 "traddr": "10.0.0.2", 00:23:16.685 "adrfam": "ipv4", 00:23:16.685 "trsvcid": "4420", 00:23:16.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.685 "prchk_reftag": false, 00:23:16.685 "prchk_guard": false, 00:23:16.685 "hdgst": false, 00:23:16.685 "ddgst": false, 00:23:16.685 "psk": "key0", 00:23:16.685 "allow_unrecognized_csi": false, 00:23:16.685 "method": "bdev_nvme_attach_controller", 00:23:16.685 "req_id": 1 00:23:16.685 } 00:23:16.685 Got JSON-RPC error response 00:23:16.685 response: 00:23:16.685 { 00:23:16.685 "code": -126, 00:23:16.685 "message": "Required key not available" 00:23:16.685 } 00:23:16.943 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2509875 00:23:16.943 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2509875 ']' 00:23:16.943 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2509875 00:23:16.943 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:16.943 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.943 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2509875 00:23:16.943 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:16.943 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:16.944 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2509875' 00:23:16.944 killing process with pid 2509875 00:23:16.944 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2509875 00:23:16.944 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.944 00:23:16.944 Latency(us) 00:23:16.944 [2024-11-28T17:25:29.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.944 [2024-11-28T17:25:29.044Z] =================================================================================================================== 00:23:16.944 [2024-11-28T17:25:29.044Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:16.944 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2509875 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2508000 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2508000 ']' 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2508000 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2508000 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2508000' 00:23:17.877 killing process with pid 2508000 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2508000 00:23:17.877 18:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2508000 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2510301 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2510301 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2510301 ']' 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.250 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.250 [2024-11-28 18:25:31.080761] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:23:19.250 [2024-11-28 18:25:31.080933] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.250 [2024-11-28 18:25:31.226286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.250 [2024-11-28 18:25:31.345634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.250 [2024-11-28 18:25:31.345728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.250 [2024-11-28 18:25:31.345750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.250 [2024-11-28 18:25:31.345771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.250 [2024-11-28 18:25:31.345787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:19.250 [2024-11-28 18:25:31.347420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.BR75MuXkWt 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.BR75MuXkWt 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.BR75MuXkWt 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BR75MuXkWt 00:23:20.183 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:20.441 [2024-11-28 18:25:32.350436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.441 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:20.698 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:20.956 [2024-11-28 18:25:32.912054] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:20.956 [2024-11-28 18:25:32.912425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.956 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:21.214 malloc0 00:23:21.214 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:21.472 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BR75MuXkWt 00:23:21.731 [2024-11-28 
18:25:33.759324] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BR75MuXkWt': 0100666 00:23:21.731 [2024-11-28 18:25:33.759383] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:21.731 request: 00:23:21.731 { 00:23:21.731 "name": "key0", 00:23:21.731 "path": "/tmp/tmp.BR75MuXkWt", 00:23:21.731 "method": "keyring_file_add_key", 00:23:21.731 "req_id": 1 00:23:21.731 } 00:23:21.731 Got JSON-RPC error response 00:23:21.731 response: 00:23:21.731 { 00:23:21.731 "code": -1, 00:23:21.731 "message": "Operation not permitted" 00:23:21.731 } 00:23:21.731 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.988 [2024-11-28 18:25:34.028111] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:21.989 [2024-11-28 18:25:34.028208] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:21.989 request: 00:23:21.989 { 00:23:21.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.989 "host": "nqn.2016-06.io.spdk:host1", 00:23:21.989 "psk": "key0", 00:23:21.989 "method": "nvmf_subsystem_add_host", 00:23:21.989 "req_id": 1 00:23:21.989 } 00:23:21.989 Got JSON-RPC error response 00:23:21.989 response: 00:23:21.989 { 00:23:21.989 "code": -32603, 00:23:21.989 "message": "Internal error" 00:23:21.989 } 00:23:21.989 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:21.989 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:21.989 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:21.989 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:21.989 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2510301 00:23:21.989 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2510301 ']' 00:23:21.989 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2510301 00:23:21.989 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.989 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.989 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2510301 00:23:22.247 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:22.247 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:22.247 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2510301' 00:23:22.247 killing process with pid 2510301 00:23:22.247 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2510301 00:23:22.247 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2510301 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.BR75MuXkWt 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:23.621 18:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2510857 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2510857 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2510857 ']' 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.621 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.621 [2024-11-28 18:25:35.450126] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:23:23.621 [2024-11-28 18:25:35.450267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.621 [2024-11-28 18:25:35.598138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.879 [2024-11-28 18:25:35.738305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.879 [2024-11-28 18:25:35.738383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.879 [2024-11-28 18:25:35.738409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.879 [2024-11-28 18:25:35.738434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.879 [2024-11-28 18:25:35.738453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
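The keyring_file_add_key failure earlier in this trace is a permissions check: SPDK's file-based keyring refuses a PSK file that is readable by group or other (the key here was created with mode 0100666), and the following nvmf_subsystem_add_host call then fails because 'key0' was never registered. The trace recovers by tightening the file mode (target/tls.sh@182) and rebuilding the target. A simplified sketch of that recovery, reusing the key path and rpc.py location from this run (in the trace itself nvmf_tgt is restarted and the full setup_nvmf_tgt helper is rerun rather than just these three commands):

  chmod 0600 /tmp/tmp.BR75MuXkWt
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BR75MuXkWt
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0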
00:23:23.879 [2024-11-28 18:25:35.740114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.445 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.445 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:24.445 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:24.445 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:24.445 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.445 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.445 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.BR75MuXkWt 00:23:24.445 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BR75MuXkWt 00:23:24.445 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.703 [2024-11-28 18:25:36.740010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.703 18:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:24.962 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.527 [2024-11-28 18:25:37.349741] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.527 [2024-11-28 18:25:37.350114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.527 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:25.785 malloc0 00:23:25.785 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:26.043 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BR75MuXkWt 00:23:26.300 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.558 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2511270 00:23:26.558 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:26.558 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.558 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2511270 /var/tmp/bdevperf.sock 00:23:26.558 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2511270 ']' 00:23:26.558 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.558 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.558 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.558 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.558 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.816 [2024-11-28 18:25:38.677973] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:23:26.816 [2024-11-28 18:25:38.678123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511270 ] 00:23:26.816 [2024-11-28 18:25:38.816608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.074 [2024-11-28 18:25:38.936671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.639 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.639 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:27.639 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BR75MuXkWt 00:23:27.897 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:28.155 [2024-11-28 18:25:40.163590] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.155 TLSTESTn1 00:23:28.413 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:28.671 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:28.671 "subsystems": [ 00:23:28.671 { 00:23:28.671 "subsystem": "keyring", 00:23:28.671 "config": [ 00:23:28.671 { 00:23:28.671 "method": "keyring_file_add_key", 00:23:28.671 "params": { 00:23:28.671 "name": "key0", 00:23:28.671 "path": "/tmp/tmp.BR75MuXkWt" 00:23:28.671 } 00:23:28.671 } 00:23:28.671 ] 00:23:28.671 }, 00:23:28.671 { 00:23:28.671 "subsystem": "iobuf", 00:23:28.671 "config": [ 00:23:28.671 { 00:23:28.671 "method": "iobuf_set_options", 00:23:28.671 "params": { 00:23:28.671 "small_pool_count": 8192, 00:23:28.671 "large_pool_count": 1024, 00:23:28.671 "small_bufsize": 8192, 00:23:28.671 "large_bufsize": 135168, 00:23:28.671 "enable_numa": false 00:23:28.671 } 00:23:28.671 } 00:23:28.671 ] 00:23:28.671 }, 00:23:28.671 { 00:23:28.671 "subsystem": "sock", 00:23:28.671 "config": [ 00:23:28.671 { 00:23:28.671 "method": "sock_set_default_impl", 00:23:28.671 "params": { 00:23:28.671 "impl_name": "posix" 
00:23:28.671 } 00:23:28.671 }, 00:23:28.671 { 00:23:28.671 "method": "sock_impl_set_options", 00:23:28.671 "params": { 00:23:28.671 "impl_name": "ssl", 00:23:28.672 "recv_buf_size": 4096, 00:23:28.672 "send_buf_size": 4096, 00:23:28.672 "enable_recv_pipe": true, 00:23:28.672 "enable_quickack": false, 00:23:28.672 "enable_placement_id": 0, 00:23:28.672 "enable_zerocopy_send_server": true, 00:23:28.672 "enable_zerocopy_send_client": false, 00:23:28.672 "zerocopy_threshold": 0, 00:23:28.672 "tls_version": 0, 00:23:28.672 "enable_ktls": false 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "sock_impl_set_options", 00:23:28.672 "params": { 00:23:28.672 "impl_name": "posix", 00:23:28.672 "recv_buf_size": 2097152, 00:23:28.672 "send_buf_size": 2097152, 00:23:28.672 "enable_recv_pipe": true, 00:23:28.672 "enable_quickack": false, 00:23:28.672 "enable_placement_id": 0, 00:23:28.672 "enable_zerocopy_send_server": true, 00:23:28.672 "enable_zerocopy_send_client": false, 00:23:28.672 "zerocopy_threshold": 0, 00:23:28.672 "tls_version": 0, 00:23:28.672 "enable_ktls": false 00:23:28.672 } 00:23:28.672 } 00:23:28.672 ] 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "subsystem": "vmd", 00:23:28.672 "config": [] 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "subsystem": "accel", 00:23:28.672 "config": [ 00:23:28.672 { 00:23:28.672 "method": "accel_set_options", 00:23:28.672 "params": { 00:23:28.672 "small_cache_size": 128, 00:23:28.672 "large_cache_size": 16, 00:23:28.672 "task_count": 2048, 00:23:28.672 "sequence_count": 2048, 00:23:28.672 "buf_count": 2048 00:23:28.672 } 00:23:28.672 } 00:23:28.672 ] 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "subsystem": "bdev", 00:23:28.672 "config": [ 00:23:28.672 { 00:23:28.672 "method": "bdev_set_options", 00:23:28.672 "params": { 00:23:28.672 "bdev_io_pool_size": 65535, 00:23:28.672 "bdev_io_cache_size": 256, 00:23:28.672 "bdev_auto_examine": true, 00:23:28.672 "iobuf_small_cache_size": 128, 00:23:28.672 "iobuf_large_cache_size": 16 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "bdev_raid_set_options", 00:23:28.672 "params": { 00:23:28.672 "process_window_size_kb": 1024, 00:23:28.672 "process_max_bandwidth_mb_sec": 0 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "bdev_iscsi_set_options", 00:23:28.672 "params": { 00:23:28.672 "timeout_sec": 30 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "bdev_nvme_set_options", 00:23:28.672 "params": { 00:23:28.672 "action_on_timeout": "none", 00:23:28.672 "timeout_us": 0, 00:23:28.672 "timeout_admin_us": 0, 00:23:28.672 "keep_alive_timeout_ms": 10000, 00:23:28.672 "arbitration_burst": 0, 00:23:28.672 "low_priority_weight": 0, 00:23:28.672 "medium_priority_weight": 0, 00:23:28.672 "high_priority_weight": 0, 00:23:28.672 "nvme_adminq_poll_period_us": 10000, 00:23:28.672 "nvme_ioq_poll_period_us": 0, 00:23:28.672 "io_queue_requests": 0, 00:23:28.672 "delay_cmd_submit": true, 00:23:28.672 "transport_retry_count": 4, 00:23:28.672 "bdev_retry_count": 3, 00:23:28.672 "transport_ack_timeout": 0, 00:23:28.672 "ctrlr_loss_timeout_sec": 0, 00:23:28.672 "reconnect_delay_sec": 0, 00:23:28.672 "fast_io_fail_timeout_sec": 0, 00:23:28.672 "disable_auto_failback": false, 00:23:28.672 "generate_uuids": false, 00:23:28.672 "transport_tos": 0, 00:23:28.672 "nvme_error_stat": false, 00:23:28.672 "rdma_srq_size": 0, 00:23:28.672 "io_path_stat": false, 00:23:28.672 "allow_accel_sequence": false, 00:23:28.672 "rdma_max_cq_size": 0, 00:23:28.672 
"rdma_cm_event_timeout_ms": 0, 00:23:28.672 "dhchap_digests": [ 00:23:28.672 "sha256", 00:23:28.672 "sha384", 00:23:28.672 "sha512" 00:23:28.672 ], 00:23:28.672 "dhchap_dhgroups": [ 00:23:28.672 "null", 00:23:28.672 "ffdhe2048", 00:23:28.672 "ffdhe3072", 00:23:28.672 "ffdhe4096", 00:23:28.672 "ffdhe6144", 00:23:28.672 "ffdhe8192" 00:23:28.672 ] 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "bdev_nvme_set_hotplug", 00:23:28.672 "params": { 00:23:28.672 "period_us": 100000, 00:23:28.672 "enable": false 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "bdev_malloc_create", 00:23:28.672 "params": { 00:23:28.672 "name": "malloc0", 00:23:28.672 "num_blocks": 8192, 00:23:28.672 "block_size": 4096, 00:23:28.672 "physical_block_size": 4096, 00:23:28.672 "uuid": "9615d0d9-e621-47fc-8771-2bbbe7c9fe0d", 00:23:28.672 "optimal_io_boundary": 0, 00:23:28.672 "md_size": 0, 00:23:28.672 "dif_type": 0, 00:23:28.672 "dif_is_head_of_md": false, 00:23:28.672 "dif_pi_format": 0 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "bdev_wait_for_examine" 00:23:28.672 } 00:23:28.672 ] 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "subsystem": "nbd", 00:23:28.672 "config": [] 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "subsystem": "scheduler", 00:23:28.672 "config": [ 00:23:28.672 { 00:23:28.672 "method": "framework_set_scheduler", 00:23:28.672 "params": { 00:23:28.672 "name": "static" 00:23:28.672 } 00:23:28.672 } 00:23:28.672 ] 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "subsystem": "nvmf", 00:23:28.672 "config": [ 00:23:28.672 { 00:23:28.672 "method": "nvmf_set_config", 00:23:28.672 "params": { 00:23:28.672 "discovery_filter": "match_any", 00:23:28.672 "admin_cmd_passthru": { 00:23:28.672 "identify_ctrlr": false 00:23:28.672 }, 00:23:28.672 "dhchap_digests": [ 00:23:28.672 "sha256", 00:23:28.672 "sha384", 00:23:28.672 "sha512" 00:23:28.672 ], 00:23:28.672 "dhchap_dhgroups": [ 00:23:28.672 "null", 00:23:28.672 "ffdhe2048", 00:23:28.672 "ffdhe3072", 00:23:28.672 "ffdhe4096", 00:23:28.672 "ffdhe6144", 00:23:28.672 "ffdhe8192" 00:23:28.672 ] 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "nvmf_set_max_subsystems", 00:23:28.672 "params": { 00:23:28.672 "max_subsystems": 1024 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "nvmf_set_crdt", 00:23:28.672 "params": { 00:23:28.672 "crdt1": 0, 00:23:28.672 "crdt2": 0, 00:23:28.672 "crdt3": 0 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "nvmf_create_transport", 00:23:28.672 "params": { 00:23:28.672 "trtype": "TCP", 00:23:28.672 "max_queue_depth": 128, 00:23:28.672 "max_io_qpairs_per_ctrlr": 127, 00:23:28.672 "in_capsule_data_size": 4096, 00:23:28.672 "max_io_size": 131072, 00:23:28.672 "io_unit_size": 131072, 00:23:28.672 "max_aq_depth": 128, 00:23:28.672 "num_shared_buffers": 511, 00:23:28.672 "buf_cache_size": 4294967295, 00:23:28.672 "dif_insert_or_strip": false, 00:23:28.672 "zcopy": false, 00:23:28.672 "c2h_success": false, 00:23:28.672 "sock_priority": 0, 00:23:28.672 "abort_timeout_sec": 1, 00:23:28.672 "ack_timeout": 0, 00:23:28.672 "data_wr_pool_size": 0 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "nvmf_create_subsystem", 00:23:28.672 "params": { 00:23:28.672 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.672 "allow_any_host": false, 00:23:28.672 "serial_number": "SPDK00000000000001", 00:23:28.672 "model_number": "SPDK bdev Controller", 00:23:28.672 "max_namespaces": 10, 00:23:28.672 "min_cntlid": 1, 00:23:28.672 
"max_cntlid": 65519, 00:23:28.672 "ana_reporting": false 00:23:28.672 } 00:23:28.672 }, 00:23:28.672 { 00:23:28.672 "method": "nvmf_subsystem_add_host", 00:23:28.672 "params": { 00:23:28.672 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.672 "host": "nqn.2016-06.io.spdk:host1", 00:23:28.673 "psk": "key0" 00:23:28.673 } 00:23:28.673 }, 00:23:28.673 { 00:23:28.673 "method": "nvmf_subsystem_add_ns", 00:23:28.673 "params": { 00:23:28.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.673 "namespace": { 00:23:28.673 "nsid": 1, 00:23:28.673 "bdev_name": "malloc0", 00:23:28.673 "nguid": "9615D0D9E62147FC87712BBBE7C9FE0D", 00:23:28.673 "uuid": "9615d0d9-e621-47fc-8771-2bbbe7c9fe0d", 00:23:28.673 "no_auto_visible": false 00:23:28.673 } 00:23:28.673 } 00:23:28.673 }, 00:23:28.673 { 00:23:28.673 "method": "nvmf_subsystem_add_listener", 00:23:28.673 "params": { 00:23:28.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.673 "listen_address": { 00:23:28.673 "trtype": "TCP", 00:23:28.673 "adrfam": "IPv4", 00:23:28.673 "traddr": "10.0.0.2", 00:23:28.673 "trsvcid": "4420" 00:23:28.673 }, 00:23:28.673 "secure_channel": true 00:23:28.673 } 00:23:28.673 } 00:23:28.673 ] 00:23:28.673 } 00:23:28.673 ] 00:23:28.673 }' 00:23:28.673 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:28.931 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:28.931 "subsystems": [ 00:23:28.931 { 00:23:28.931 "subsystem": "keyring", 00:23:28.931 "config": [ 00:23:28.931 { 00:23:28.931 "method": "keyring_file_add_key", 00:23:28.931 "params": { 00:23:28.931 "name": "key0", 00:23:28.931 "path": "/tmp/tmp.BR75MuXkWt" 00:23:28.931 } 00:23:28.931 } 00:23:28.931 ] 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "subsystem": "iobuf", 00:23:28.931 "config": [ 00:23:28.931 { 00:23:28.931 "method": "iobuf_set_options", 00:23:28.931 "params": { 00:23:28.931 "small_pool_count": 8192, 00:23:28.931 "large_pool_count": 1024, 00:23:28.931 "small_bufsize": 8192, 00:23:28.931 "large_bufsize": 135168, 00:23:28.931 "enable_numa": false 00:23:28.931 } 00:23:28.931 } 00:23:28.931 ] 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "subsystem": "sock", 00:23:28.931 "config": [ 00:23:28.931 { 00:23:28.931 "method": "sock_set_default_impl", 00:23:28.931 "params": { 00:23:28.931 "impl_name": "posix" 00:23:28.931 } 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "method": "sock_impl_set_options", 00:23:28.931 "params": { 00:23:28.931 "impl_name": "ssl", 00:23:28.931 "recv_buf_size": 4096, 00:23:28.931 "send_buf_size": 4096, 00:23:28.931 "enable_recv_pipe": true, 00:23:28.931 "enable_quickack": false, 00:23:28.931 "enable_placement_id": 0, 00:23:28.931 "enable_zerocopy_send_server": true, 00:23:28.931 "enable_zerocopy_send_client": false, 00:23:28.931 "zerocopy_threshold": 0, 00:23:28.931 "tls_version": 0, 00:23:28.931 "enable_ktls": false 00:23:28.931 } 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "method": "sock_impl_set_options", 00:23:28.931 "params": { 00:23:28.931 "impl_name": "posix", 00:23:28.931 "recv_buf_size": 2097152, 00:23:28.931 "send_buf_size": 2097152, 00:23:28.931 "enable_recv_pipe": true, 00:23:28.931 "enable_quickack": false, 00:23:28.931 "enable_placement_id": 0, 00:23:28.931 "enable_zerocopy_send_server": true, 00:23:28.931 "enable_zerocopy_send_client": false, 00:23:28.931 "zerocopy_threshold": 0, 00:23:28.931 "tls_version": 0, 00:23:28.931 "enable_ktls": false 00:23:28.931 } 00:23:28.931 
} 00:23:28.931 ] 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "subsystem": "vmd", 00:23:28.931 "config": [] 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "subsystem": "accel", 00:23:28.931 "config": [ 00:23:28.931 { 00:23:28.931 "method": "accel_set_options", 00:23:28.931 "params": { 00:23:28.931 "small_cache_size": 128, 00:23:28.931 "large_cache_size": 16, 00:23:28.931 "task_count": 2048, 00:23:28.931 "sequence_count": 2048, 00:23:28.931 "buf_count": 2048 00:23:28.931 } 00:23:28.931 } 00:23:28.931 ] 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "subsystem": "bdev", 00:23:28.931 "config": [ 00:23:28.931 { 00:23:28.931 "method": "bdev_set_options", 00:23:28.931 "params": { 00:23:28.931 "bdev_io_pool_size": 65535, 00:23:28.931 "bdev_io_cache_size": 256, 00:23:28.931 "bdev_auto_examine": true, 00:23:28.931 "iobuf_small_cache_size": 128, 00:23:28.931 "iobuf_large_cache_size": 16 00:23:28.931 } 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "method": "bdev_raid_set_options", 00:23:28.931 "params": { 00:23:28.931 "process_window_size_kb": 1024, 00:23:28.931 "process_max_bandwidth_mb_sec": 0 00:23:28.931 } 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "method": "bdev_iscsi_set_options", 00:23:28.931 "params": { 00:23:28.931 "timeout_sec": 30 00:23:28.931 } 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "method": "bdev_nvme_set_options", 00:23:28.931 "params": { 00:23:28.931 "action_on_timeout": "none", 00:23:28.931 "timeout_us": 0, 00:23:28.931 "timeout_admin_us": 0, 00:23:28.931 "keep_alive_timeout_ms": 10000, 00:23:28.931 "arbitration_burst": 0, 00:23:28.931 "low_priority_weight": 0, 00:23:28.931 "medium_priority_weight": 0, 00:23:28.931 "high_priority_weight": 0, 00:23:28.931 "nvme_adminq_poll_period_us": 10000, 00:23:28.931 "nvme_ioq_poll_period_us": 0, 00:23:28.931 "io_queue_requests": 512, 00:23:28.931 "delay_cmd_submit": true, 00:23:28.931 "transport_retry_count": 4, 00:23:28.931 "bdev_retry_count": 3, 00:23:28.931 "transport_ack_timeout": 0, 00:23:28.931 "ctrlr_loss_timeout_sec": 0, 00:23:28.931 "reconnect_delay_sec": 0, 00:23:28.931 "fast_io_fail_timeout_sec": 0, 00:23:28.931 "disable_auto_failback": false, 00:23:28.931 "generate_uuids": false, 00:23:28.931 "transport_tos": 0, 00:23:28.931 "nvme_error_stat": false, 00:23:28.931 "rdma_srq_size": 0, 00:23:28.931 "io_path_stat": false, 00:23:28.931 "allow_accel_sequence": false, 00:23:28.931 "rdma_max_cq_size": 0, 00:23:28.931 "rdma_cm_event_timeout_ms": 0, 00:23:28.931 "dhchap_digests": [ 00:23:28.931 "sha256", 00:23:28.931 "sha384", 00:23:28.931 "sha512" 00:23:28.931 ], 00:23:28.931 "dhchap_dhgroups": [ 00:23:28.931 "null", 00:23:28.931 "ffdhe2048", 00:23:28.931 "ffdhe3072", 00:23:28.931 "ffdhe4096", 00:23:28.931 "ffdhe6144", 00:23:28.931 "ffdhe8192" 00:23:28.931 ] 00:23:28.931 } 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "method": "bdev_nvme_attach_controller", 00:23:28.931 "params": { 00:23:28.931 "name": "TLSTEST", 00:23:28.931 "trtype": "TCP", 00:23:28.931 "adrfam": "IPv4", 00:23:28.931 "traddr": "10.0.0.2", 00:23:28.931 "trsvcid": "4420", 00:23:28.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.931 "prchk_reftag": false, 00:23:28.931 "prchk_guard": false, 00:23:28.931 "ctrlr_loss_timeout_sec": 0, 00:23:28.931 "reconnect_delay_sec": 0, 00:23:28.931 "fast_io_fail_timeout_sec": 0, 00:23:28.931 "psk": "key0", 00:23:28.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.931 "hdgst": false, 00:23:28.931 "ddgst": false, 00:23:28.931 "multipath": "multipath" 00:23:28.931 } 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "method": 
"bdev_nvme_set_hotplug", 00:23:28.931 "params": { 00:23:28.931 "period_us": 100000, 00:23:28.931 "enable": false 00:23:28.931 } 00:23:28.931 }, 00:23:28.932 { 00:23:28.932 "method": "bdev_wait_for_examine" 00:23:28.932 } 00:23:28.932 ] 00:23:28.932 }, 00:23:28.932 { 00:23:28.932 "subsystem": "nbd", 00:23:28.932 "config": [] 00:23:28.932 } 00:23:28.932 ] 00:23:28.932 }' 00:23:28.932 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2511270 00:23:28.932 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2511270 ']' 00:23:28.932 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2511270 00:23:28.932 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.932 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.932 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511270 00:23:29.190 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:29.190 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:29.190 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511270' 00:23:29.190 killing process with pid 2511270 00:23:29.190 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2511270 00:23:29.190 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.190 00:23:29.190 Latency(us) 00:23:29.190 [2024-11-28T17:25:41.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.190 [2024-11-28T17:25:41.290Z] =================================================================================================================== 00:23:29.190 [2024-11-28T17:25:41.290Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.190 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2511270 00:23:30.125 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2510857 00:23:30.125 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2510857 ']' 00:23:30.125 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2510857 00:23:30.125 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:30.125 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.125 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2510857 00:23:30.125 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:30.125 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:30.125 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2510857' 00:23:30.125 killing process with pid 2510857 00:23:30.125 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2510857 00:23:30.125 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2510857 00:23:31.059 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:31.059 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:31.059 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:31.059 "subsystems": [ 00:23:31.059 { 00:23:31.059 "subsystem": "keyring", 00:23:31.059 "config": [ 00:23:31.059 { 00:23:31.059 "method": "keyring_file_add_key", 00:23:31.059 "params": { 00:23:31.059 "name": "key0", 00:23:31.059 "path": "/tmp/tmp.BR75MuXkWt" 00:23:31.059 } 00:23:31.059 } 00:23:31.059 ] 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "subsystem": "iobuf", 00:23:31.059 "config": [ 00:23:31.059 { 00:23:31.059 "method": "iobuf_set_options", 00:23:31.059 "params": { 00:23:31.059 "small_pool_count": 8192, 00:23:31.059 "large_pool_count": 1024, 00:23:31.059 "small_bufsize": 8192, 00:23:31.059 "large_bufsize": 135168, 00:23:31.059 "enable_numa": false 00:23:31.059 } 00:23:31.059 } 00:23:31.059 ] 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "subsystem": "sock", 00:23:31.059 "config": [ 00:23:31.059 { 00:23:31.059 "method": "sock_set_default_impl", 00:23:31.059 "params": { 00:23:31.059 "impl_name": "posix" 00:23:31.059 } 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "method": "sock_impl_set_options", 00:23:31.059 "params": { 00:23:31.059 "impl_name": "ssl", 00:23:31.059 "recv_buf_size": 4096, 00:23:31.059 "send_buf_size": 4096, 00:23:31.059 "enable_recv_pipe": true, 00:23:31.059 "enable_quickack": false, 00:23:31.059 "enable_placement_id": 0, 00:23:31.059 "enable_zerocopy_send_server": true, 00:23:31.059 "enable_zerocopy_send_client": false, 00:23:31.059 "zerocopy_threshold": 0, 00:23:31.059 "tls_version": 0, 00:23:31.059 "enable_ktls": false 00:23:31.059 } 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "method": "sock_impl_set_options", 00:23:31.059 "params": { 00:23:31.059 "impl_name": "posix", 00:23:31.059 "recv_buf_size": 2097152, 00:23:31.059 "send_buf_size": 2097152, 00:23:31.059 "enable_recv_pipe": true, 00:23:31.059 "enable_quickack": false, 00:23:31.059 "enable_placement_id": 0, 00:23:31.059 "enable_zerocopy_send_server": true, 00:23:31.059 "enable_zerocopy_send_client": false, 00:23:31.059 "zerocopy_threshold": 0, 00:23:31.059 "tls_version": 0, 00:23:31.059 "enable_ktls": false 00:23:31.059 } 00:23:31.059 } 00:23:31.059 ] 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "subsystem": "vmd", 00:23:31.059 "config": [] 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "subsystem": "accel", 00:23:31.059 "config": [ 00:23:31.059 { 00:23:31.059 "method": "accel_set_options", 00:23:31.059 "params": { 00:23:31.059 "small_cache_size": 128, 00:23:31.059 "large_cache_size": 16, 00:23:31.059 "task_count": 2048, 00:23:31.059 "sequence_count": 2048, 00:23:31.059 "buf_count": 2048 00:23:31.059 } 00:23:31.059 } 00:23:31.059 ] 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "subsystem": "bdev", 00:23:31.059 "config": [ 00:23:31.059 { 00:23:31.059 "method": "bdev_set_options", 00:23:31.059 "params": { 00:23:31.059 "bdev_io_pool_size": 65535, 00:23:31.059 "bdev_io_cache_size": 256, 00:23:31.059 "bdev_auto_examine": true, 00:23:31.059 "iobuf_small_cache_size": 128, 00:23:31.059 "iobuf_large_cache_size": 16 00:23:31.059 } 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "method": "bdev_raid_set_options", 00:23:31.059 "params": { 00:23:31.059 "process_window_size_kb": 1024, 00:23:31.059 "process_max_bandwidth_mb_sec": 0 00:23:31.059 } 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "method": "bdev_iscsi_set_options", 00:23:31.059 "params": { 00:23:31.059 
"timeout_sec": 30 00:23:31.059 } 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "method": "bdev_nvme_set_options", 00:23:31.059 "params": { 00:23:31.059 "action_on_timeout": "none", 00:23:31.059 "timeout_us": 0, 00:23:31.059 "timeout_admin_us": 0, 00:23:31.059 "keep_alive_timeout_ms": 10000, 00:23:31.059 "arbitration_burst": 0, 00:23:31.059 "low_priority_weight": 0, 00:23:31.059 "medium_priority_weight": 0, 00:23:31.059 "high_priority_weight": 0, 00:23:31.059 "nvme_adminq_poll_period_us": 10000, 00:23:31.059 "nvme_ioq_poll_period_us": 0, 00:23:31.059 "io_queue_requests": 0, 00:23:31.059 "delay_cmd_submit": true, 00:23:31.059 "transport_retry_count": 4, 00:23:31.059 "bdev_retry_count": 3, 00:23:31.059 "transport_ack_timeout": 0, 00:23:31.059 "ctrlr_loss_timeout_sec": 0, 00:23:31.059 "reconnect_delay_sec": 0, 00:23:31.059 "fast_io_fail_timeout_sec": 0, 00:23:31.059 "disable_auto_failback": false, 00:23:31.059 "generate_uuids": false, 00:23:31.059 "transport_tos": 0, 00:23:31.059 "nvme_error_stat": false, 00:23:31.059 "rdma_srq_size": 0, 00:23:31.059 "io_path_stat": false, 00:23:31.059 "allow_accel_sequence": false, 00:23:31.059 "rdma_max_cq_size": 0, 00:23:31.059 "rdma_cm_event_timeout_ms": 0, 00:23:31.059 "dhchap_digests": [ 00:23:31.059 "sha256", 00:23:31.059 "sha384", 00:23:31.059 "sha512" 00:23:31.059 ], 00:23:31.059 "dhchap_dhgroups": [ 00:23:31.059 "null", 00:23:31.059 "ffdhe2048", 00:23:31.059 "ffdhe3072", 00:23:31.059 "ffdhe4096", 00:23:31.059 "ffdhe6144", 00:23:31.059 "ffdhe8192" 00:23:31.059 ] 00:23:31.059 } 00:23:31.059 }, 00:23:31.059 { 00:23:31.059 "method": "bdev_nvme_set_hotplug", 00:23:31.059 "params": { 00:23:31.059 "period_us": 100000, 00:23:31.059 "enable": false 00:23:31.060 } 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "method": "bdev_malloc_create", 00:23:31.060 "params": { 00:23:31.060 "name": "malloc0", 00:23:31.060 "num_blocks": 8192, 00:23:31.060 "block_size": 4096, 00:23:31.060 "physical_block_size": 4096, 00:23:31.060 "uuid": "9615d0d9-e621-47fc-8771-2bbbe7c9fe0d", 00:23:31.060 "optimal_io_boundary": 0, 00:23:31.060 "md_size": 0, 00:23:31.060 "dif_type": 0, 00:23:31.060 "dif_is_head_of_md": false, 00:23:31.060 "dif_pi_format": 0 00:23:31.060 } 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "method": "bdev_wait_for_examine" 00:23:31.060 } 00:23:31.060 ] 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "subsystem": "nbd", 00:23:31.060 "config": [] 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "subsystem": "scheduler", 00:23:31.060 "config": [ 00:23:31.060 { 00:23:31.060 "method": "framework_set_scheduler", 00:23:31.060 "params": { 00:23:31.060 "name": "static" 00:23:31.060 } 00:23:31.060 } 00:23:31.060 ] 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "subsystem": "nvmf", 00:23:31.060 "config": [ 00:23:31.060 { 00:23:31.060 "method": "nvmf_set_config", 00:23:31.060 "params": { 00:23:31.060 "discovery_filter": "match_any", 00:23:31.060 "admin_cmd_passthru": { 00:23:31.060 "identify_ctrlr": false 00:23:31.060 }, 00:23:31.060 "dhchap_digests": [ 00:23:31.060 "sha256", 00:23:31.060 "sha384", 00:23:31.060 "sha512" 00:23:31.060 ], 00:23:31.060 "dhchap_dhgroups": [ 00:23:31.060 "null", 00:23:31.060 "ffdhe2048", 00:23:31.060 "ffdhe3072", 00:23:31.060 "ffdhe4096", 00:23:31.060 "ffdhe6144", 00:23:31.060 "ffdhe8192" 00:23:31.060 ] 00:23:31.060 } 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "method": "nvmf_set_max_subsystems", 00:23:31.060 "params": { 00:23:31.060 "max_subsystems": 1024 00:23:31.060 } 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "method": "nvmf_set_crdt", 00:23:31.060 "params": { 
00:23:31.060 "crdt1": 0, 00:23:31.060 "crdt2": 0, 00:23:31.060 "crdt3": 0 00:23:31.060 } 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "method": "nvmf_create_transport", 00:23:31.060 "params": { 00:23:31.060 "trtype": "TCP", 00:23:31.060 "max_queue_depth": 128, 00:23:31.060 "max_io_qpairs_per_ctrlr": 127, 00:23:31.060 "in_capsule_data_size": 4096, 00:23:31.060 "max_io_size": 131072, 00:23:31.060 "io_unit_size": 131072, 00:23:31.060 "max_aq_depth": 128, 00:23:31.060 "num_shared_buffers": 511, 00:23:31.060 "buf_cache_size": 4294967295, 00:23:31.060 "dif_insert_or_strip": false, 00:23:31.060 "zcopy": false, 00:23:31.060 "c2h_success": false, 00:23:31.060 "sock_priority": 0, 00:23:31.060 "abort_timeout_sec": 1, 00:23:31.060 "ack_timeout": 0, 00:23:31.060 "data_wr_pool_size": 0 00:23:31.060 } 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "method": "nvmf_create_subsystem", 00:23:31.060 "params": { 00:23:31.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.060 "allow_any_host": false, 00:23:31.060 "serial_number": "SPDK00000000000001", 00:23:31.060 "model_number": "SPDK bdev Controller", 00:23:31.060 "max_namespaces": 10, 00:23:31.060 "min_cntlid": 1, 00:23:31.060 "max_cntlid": 65519, 00:23:31.060 "ana_reporting": false 00:23:31.060 } 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "method": "nvmf_subsystem_add_host", 00:23:31.060 "params": { 00:23:31.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.060 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.060 "psk": "key0" 00:23:31.060 } 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "method": "nvmf_subsystem_add_ns", 00:23:31.060 "params": { 00:23:31.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.060 "namespace": { 00:23:31.060 "nsid": 1, 00:23:31.060 "bdev_name": "malloc0", 00:23:31.060 "nguid": "9615D0D9E62147FC87712BBBE7C9FE0D", 00:23:31.060 "uuid": "9615d0d9-e621-47fc-8771-2bbbe7c9fe0d", 00:23:31.060 "no_auto_visible": false 00:23:31.060 } 00:23:31.060 } 00:23:31.060 }, 00:23:31.060 { 00:23:31.060 "method": "nvmf_subsystem_add_listener", 00:23:31.060 "params": { 00:23:31.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.060 "listen_address": { 00:23:31.060 "trtype": "TCP", 00:23:31.060 "adrfam": "IPv4", 00:23:31.060 "traddr": "10.0.0.2", 00:23:31.060 "trsvcid": "4420" 00:23:31.060 }, 00:23:31.060 "secure_channel": true 00:23:31.060 } 00:23:31.060 } 00:23:31.060 ] 00:23:31.060 } 00:23:31.060 ] 00:23:31.060 }' 00:23:31.060 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.060 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.060 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2511818 00:23:31.060 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:31.060 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2511818 00:23:31.060 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2511818 ']' 00:23:31.060 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.060 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.060 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:31.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.060 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.060 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.318 [2024-11-28 18:25:43.204552] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:23:31.318 [2024-11-28 18:25:43.204713] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.318 [2024-11-28 18:25:43.354801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.576 [2024-11-28 18:25:43.489283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.576 [2024-11-28 18:25:43.489377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.576 [2024-11-28 18:25:43.489403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.576 [2024-11-28 18:25:43.489428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.576 [2024-11-28 18:25:43.489448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.577 [2024-11-28 18:25:43.491201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.143 [2024-11-28 18:25:44.045922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.143 [2024-11-28 18:25:44.077947] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:32.143 [2024-11-28 18:25:44.078290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2511971 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2511971 /var/tmp/bdevperf.sock 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2511971 ']' 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:32.143 18:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.143 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:32.143 "subsystems": [ 00:23:32.143 { 00:23:32.143 "subsystem": "keyring", 00:23:32.143 "config": [ 00:23:32.143 { 00:23:32.143 "method": "keyring_file_add_key", 00:23:32.143 "params": { 00:23:32.143 "name": "key0", 00:23:32.143 "path": "/tmp/tmp.BR75MuXkWt" 00:23:32.143 } 00:23:32.143 } 00:23:32.143 ] 00:23:32.143 }, 00:23:32.143 { 00:23:32.143 "subsystem": "iobuf", 00:23:32.143 "config": [ 00:23:32.143 { 00:23:32.143 "method": "iobuf_set_options", 00:23:32.143 "params": { 00:23:32.143 "small_pool_count": 8192, 00:23:32.143 "large_pool_count": 1024, 00:23:32.143 "small_bufsize": 8192, 00:23:32.143 "large_bufsize": 135168, 00:23:32.143 "enable_numa": false 00:23:32.143 } 00:23:32.143 } 00:23:32.143 ] 00:23:32.143 }, 00:23:32.143 { 00:23:32.143 "subsystem": "sock", 00:23:32.143 "config": [ 00:23:32.143 { 00:23:32.143 "method": "sock_set_default_impl", 00:23:32.143 "params": { 00:23:32.143 "impl_name": "posix" 00:23:32.143 } 00:23:32.143 }, 00:23:32.143 { 00:23:32.143 "method": "sock_impl_set_options", 00:23:32.143 "params": { 00:23:32.143 "impl_name": "ssl", 00:23:32.143 "recv_buf_size": 4096, 00:23:32.143 "send_buf_size": 4096, 00:23:32.143 "enable_recv_pipe": true, 00:23:32.143 "enable_quickack": false, 00:23:32.143 "enable_placement_id": 0, 00:23:32.143 "enable_zerocopy_send_server": true, 00:23:32.143 "enable_zerocopy_send_client": false, 00:23:32.143 "zerocopy_threshold": 0, 00:23:32.143 "tls_version": 0, 00:23:32.143 "enable_ktls": false 00:23:32.143 } 00:23:32.143 }, 00:23:32.143 { 00:23:32.143 "method": "sock_impl_set_options", 00:23:32.143 "params": { 00:23:32.143 "impl_name": "posix", 00:23:32.143 "recv_buf_size": 2097152, 00:23:32.143 "send_buf_size": 2097152, 00:23:32.143 "enable_recv_pipe": true, 00:23:32.143 "enable_quickack": false, 00:23:32.143 "enable_placement_id": 0, 00:23:32.143 "enable_zerocopy_send_server": true, 00:23:32.143 "enable_zerocopy_send_client": false, 00:23:32.143 "zerocopy_threshold": 0, 00:23:32.143 "tls_version": 0, 00:23:32.143 "enable_ktls": false 00:23:32.143 } 00:23:32.143 } 00:23:32.143 ] 00:23:32.143 }, 00:23:32.143 { 00:23:32.143 "subsystem": "vmd", 00:23:32.143 "config": [] 00:23:32.143 }, 00:23:32.143 { 00:23:32.143 "subsystem": "accel", 00:23:32.143 "config": [ 00:23:32.143 { 00:23:32.143 "method": "accel_set_options", 00:23:32.143 "params": { 00:23:32.143 "small_cache_size": 128, 00:23:32.143 "large_cache_size": 16, 00:23:32.143 "task_count": 2048, 00:23:32.143 "sequence_count": 2048, 00:23:32.143 "buf_count": 2048 00:23:32.143 } 00:23:32.143 } 00:23:32.143 ] 00:23:32.143 }, 00:23:32.143 { 00:23:32.143 "subsystem": "bdev", 00:23:32.143 "config": [ 00:23:32.143 { 00:23:32.143 "method": "bdev_set_options", 00:23:32.143 "params": { 00:23:32.143 "bdev_io_pool_size": 65535, 00:23:32.143 "bdev_io_cache_size": 256, 00:23:32.143 "bdev_auto_examine": true, 00:23:32.143 "iobuf_small_cache_size": 128, 00:23:32.143 "iobuf_large_cache_size": 16 00:23:32.143 } 00:23:32.143 }, 00:23:32.143 { 00:23:32.143 "method": "bdev_raid_set_options", 00:23:32.143 "params": { 00:23:32.143 
"process_window_size_kb": 1024, 00:23:32.143 "process_max_bandwidth_mb_sec": 0 00:23:32.143 } 00:23:32.143 }, 00:23:32.143 { 00:23:32.143 "method": "bdev_iscsi_set_options", 00:23:32.143 "params": { 00:23:32.143 "timeout_sec": 30 00:23:32.143 } 00:23:32.143 }, 00:23:32.143 { 00:23:32.143 "method": "bdev_nvme_set_options", 00:23:32.143 "params": { 00:23:32.143 "action_on_timeout": "none", 00:23:32.143 "timeout_us": 0, 00:23:32.143 "timeout_admin_us": 0, 00:23:32.143 "keep_alive_timeout_ms": 10000, 00:23:32.143 "arbitration_burst": 0, 00:23:32.143 "low_priority_weight": 0, 00:23:32.143 "medium_priority_weight": 0, 00:23:32.143 "high_priority_weight": 0, 00:23:32.144 "nvme_adminq_poll_period_us": 10000, 00:23:32.144 "nvme_ioq_poll_period_us": 0, 00:23:32.144 "io_queue_requests": 512, 00:23:32.144 "delay_cmd_submit": true, 00:23:32.144 "transport_retry_count": 4, 00:23:32.144 "bdev_retry_count": 3, 00:23:32.144 "transport_ack_timeout": 0, 00:23:32.144 "ctrlr_loss_timeout_sec": 0, 00:23:32.144 "reconnect_delay_sec": 0, 00:23:32.144 "fast_io_fail_timeout_sec": 0, 00:23:32.144 "disable_auto_failback": false, 00:23:32.144 "generate_uuids": false, 00:23:32.144 "transport_tos": 0, 00:23:32.144 "nvme_error_stat": false, 00:23:32.144 "rdma_srq_size": 0, 00:23:32.144 "io_path_stat": false, 00:23:32.144 "allow_accel_sequence": false, 00:23:32.144 "rdma_max_cq_size": 0, 00:23:32.144 "rdma_cm_event_timeout_ms": 0, 00:23:32.144 "dhchap_digests": [ 00:23:32.144 "sha256", 00:23:32.144 "sha384", 00:23:32.144 "sha512" 00:23:32.144 ], 00:23:32.144 "dhchap_dhgroups": [ 00:23:32.144 "null", 00:23:32.144 "ffdhe2048", 00:23:32.144 "ffdhe3072", 00:23:32.144 "ffdhe4096", 00:23:32.144 "ffdhe6144", 00:23:32.144 "ffdhe8192" 00:23:32.144 ] 00:23:32.144 } 00:23:32.144 }, 00:23:32.144 { 00:23:32.144 "method": "bdev_nvme_attach_controller", 00:23:32.144 "params": { 00:23:32.144 "name": "TLSTEST", 00:23:32.144 "trtype": "TCP", 00:23:32.144 "adrfam": "IPv4", 00:23:32.144 "traddr": "10.0.0.2", 00:23:32.144 "trsvcid": "4420", 00:23:32.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.144 "prchk_reftag": false, 00:23:32.144 "prchk_guard": false, 00:23:32.144 "ctrlr_loss_timeout_sec": 0, 00:23:32.144 "reconnect_delay_sec": 0, 00:23:32.144 "fast_io_fail_timeout_sec": 0, 00:23:32.144 "psk": "key0", 00:23:32.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.144 "hdgst": false, 00:23:32.144 "ddgst": false, 00:23:32.144 "multipath": "multipath" 00:23:32.144 } 00:23:32.144 }, 00:23:32.144 { 00:23:32.144 "method": "bdev_nvme_set_hotplug", 00:23:32.144 "params": { 00:23:32.144 "period_us": 100000, 00:23:32.144 "enable": false 00:23:32.144 } 00:23:32.144 }, 00:23:32.144 { 00:23:32.144 "method": "bdev_wait_for_examine" 00:23:32.144 } 00:23:32.144 ] 00:23:32.144 }, 00:23:32.144 { 00:23:32.144 "subsystem": "nbd", 00:23:32.144 "config": [] 00:23:32.144 } 00:23:32.144 ] 00:23:32.144 }' 00:23:32.144 18:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.402 [2024-11-28 18:25:44.249540] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:23:32.402 [2024-11-28 18:25:44.249689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2511971 ] 00:23:32.402 [2024-11-28 18:25:44.384796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.660 [2024-11-28 18:25:44.510445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.917 [2024-11-28 18:25:44.922297] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:33.483 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.483 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.483 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:33.483 Running I/O for 10 seconds... 00:23:35.350 2646.00 IOPS, 10.34 MiB/s [2024-11-28T17:25:48.823Z] 2677.50 IOPS, 10.46 MiB/s [2024-11-28T17:25:49.787Z] 2684.33 IOPS, 10.49 MiB/s [2024-11-28T17:25:50.745Z] 2674.75 IOPS, 10.45 MiB/s [2024-11-28T17:25:51.679Z] 2664.60 IOPS, 10.41 MiB/s [2024-11-28T17:25:52.611Z] 2665.17 IOPS, 10.41 MiB/s [2024-11-28T17:25:53.543Z] 2668.00 IOPS, 10.42 MiB/s [2024-11-28T17:25:54.475Z] 2669.50 IOPS, 10.43 MiB/s [2024-11-28T17:25:55.848Z] 2673.22 IOPS, 10.44 MiB/s [2024-11-28T17:25:55.848Z] 2676.50 IOPS, 10.46 MiB/s 00:23:43.748 Latency(us) 00:23:43.748 [2024-11-28T17:25:55.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.748 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:43.748 Verification LBA range: start 0x0 length 0x2000 00:23:43.748 TLSTESTn1 : 10.03 2681.20 10.47 0.00 0.00 47641.59 11068.30 40389.59 00:23:43.748 [2024-11-28T17:25:55.848Z] =================================================================================================================== 00:23:43.748 [2024-11-28T17:25:55.848Z] Total : 2681.20 10.47 0.00 0.00 47641.59 11068.30 40389.59 00:23:43.748 { 00:23:43.748 "results": [ 00:23:43.748 { 00:23:43.748 "job": "TLSTESTn1", 00:23:43.748 "core_mask": "0x4", 00:23:43.748 "workload": "verify", 00:23:43.748 "status": "finished", 00:23:43.748 "verify_range": { 00:23:43.748 "start": 0, 00:23:43.748 "length": 8192 00:23:43.748 }, 00:23:43.748 "queue_depth": 128, 00:23:43.748 "io_size": 4096, 00:23:43.748 "runtime": 10.029478, 00:23:43.748 "iops": 2681.1963693424523, 00:23:43.748 "mibps": 10.473423317743954, 00:23:43.748 "io_failed": 0, 00:23:43.748 "io_timeout": 0, 00:23:43.748 "avg_latency_us": 47641.58509957207, 00:23:43.748 "min_latency_us": 11068.302222222223, 00:23:43.748 "max_latency_us": 40389.59407407408 00:23:43.748 } 00:23:43.748 ], 00:23:43.748 "core_count": 1 00:23:43.748 } 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2511971 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2511971 ']' 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2511971 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511971 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511971' 00:23:43.748 killing process with pid 2511971 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2511971 00:23:43.748 Received shutdown signal, test time was about 10.000000 seconds 00:23:43.748 00:23:43.748 Latency(us) 00:23:43.748 [2024-11-28T17:25:55.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.748 [2024-11-28T17:25:55.848Z] =================================================================================================================== 00:23:43.748 [2024-11-28T17:25:55.848Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:43.748 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2511971 00:23:44.315 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2511818 00:23:44.315 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2511818 ']' 00:23:44.315 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2511818 00:23:44.315 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.315 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.315 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511818 00:23:44.315 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.315 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.315 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511818' 00:23:44.315 killing process with pid 2511818 00:23:44.315 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2511818 00:23:44.315 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2511818 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2513564 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2513564 
00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2513564 ']' 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.690 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.690 [2024-11-28 18:25:57.666311] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:23:45.690 [2024-11-28 18:25:57.666475] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.948 [2024-11-28 18:25:57.827045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.948 [2024-11-28 18:25:57.965184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.948 [2024-11-28 18:25:57.965278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:45.948 [2024-11-28 18:25:57.965304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.948 [2024-11-28 18:25:57.965329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.948 [2024-11-28 18:25:57.965349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
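The nvmf_tgt instance just started (tracked as nvmfpid=2513564) is configured by the same setup_nvmf_tgt helper seen earlier in this trace (target/tls.sh lines 50-59). For reference, the RPC sequence it issues, copied from the commands visible in this log (the workspace path, the 10.0.0.2 listener address and the temporary key file are specific to this job):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.BR75MuXkWt
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

where rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py. The -k flag on the listener and the --psk option on the host entry are what put the 10.0.0.2:4420 listener into the experimental TLS mode reported by nvmf_tcp_listen below.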
00:23:45.948 [2024-11-28 18:25:57.967007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.884 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.884 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:46.884 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.884 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.884 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.884 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.884 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.BR75MuXkWt 00:23:46.884 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BR75MuXkWt 00:23:46.884 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:46.884 [2024-11-28 18:25:58.908350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.884 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:47.141 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:47.399 [2024-11-28 18:25:59.449842] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.399 [2024-11-28 18:25:59.450210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.399 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:47.966 malloc0 00:23:47.966 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:47.966 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BR75MuXkWt 00:23:48.225 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:48.791 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2513874 00:23:48.791 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:48.791 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.791 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2513874 /var/tmp/bdevperf.sock 00:23:48.791 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2513874 ']' 00:23:48.791 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.791 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.791 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.791 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.791 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 [2024-11-28 18:26:00.671204] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:23:48.791 [2024-11-28 18:26:00.671358] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513874 ] 00:23:48.791 [2024-11-28 18:26:00.805248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.049 [2024-11-28 18:26:00.932180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.614 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.614 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:49.614 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BR75MuXkWt 00:23:49.872 18:26:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:50.129 [2024-11-28 18:26:02.147160] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.387 nvme0n1 00:23:50.387 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:50.387 Running I/O for 1 seconds... 
00:23:51.323 1889.00 IOPS, 7.38 MiB/s 00:23:51.323 Latency(us) 00:23:51.323 [2024-11-28T17:26:03.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.323 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:51.323 Verification LBA range: start 0x0 length 0x2000 00:23:51.323 nvme0n1 : 1.03 1965.48 7.68 0.00 0.00 64428.11 9806.13 57865.86 00:23:51.323 [2024-11-28T17:26:03.423Z] =================================================================================================================== 00:23:51.323 [2024-11-28T17:26:03.423Z] Total : 1965.48 7.68 0.00 0.00 64428.11 9806.13 57865.86 00:23:51.323 { 00:23:51.323 "results": [ 00:23:51.323 { 00:23:51.323 "job": "nvme0n1", 00:23:51.323 "core_mask": "0x2", 00:23:51.323 "workload": "verify", 00:23:51.323 "status": "finished", 00:23:51.323 "verify_range": { 00:23:51.323 "start": 0, 00:23:51.323 "length": 8192 00:23:51.323 }, 00:23:51.323 "queue_depth": 128, 00:23:51.323 "io_size": 4096, 00:23:51.323 "runtime": 1.026723, 00:23:51.323 "iops": 1965.4765696297834, 00:23:51.323 "mibps": 7.677642850116341, 00:23:51.323 "io_failed": 0, 00:23:51.323 "io_timeout": 0, 00:23:51.323 "avg_latency_us": 64428.10890430569, 00:23:51.323 "min_latency_us": 9806.127407407408, 00:23:51.323 "max_latency_us": 57865.86074074074 00:23:51.323 } 00:23:51.323 ], 00:23:51.323 "core_count": 1 00:23:51.323 } 00:23:51.323 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2513874 00:23:51.323 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2513874 ']' 00:23:51.323 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2513874 00:23:51.323 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:51.323 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.323 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2513874 00:23:51.582 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:51.582 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:51.582 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2513874' 00:23:51.582 killing process with pid 2513874 00:23:51.582 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2513874 00:23:51.582 Received shutdown signal, test time was about 1.000000 seconds 00:23:51.582 00:23:51.582 Latency(us) 00:23:51.582 [2024-11-28T17:26:03.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.582 [2024-11-28T17:26:03.682Z] =================================================================================================================== 00:23:51.582 [2024-11-28T17:26:03.682Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.582 18:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2513874 00:23:52.518 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2513564 00:23:52.518 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2513564 ']' 00:23:52.518 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2513564 00:23:52.518 18:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.518 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.518 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2513564 00:23:52.518 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:52.518 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:52.518 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2513564' 00:23:52.518 killing process with pid 2513564 00:23:52.518 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2513564 00:23:52.518 18:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2513564 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2514534 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2514534 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2514534 ']' 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.889 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.889 [2024-11-28 18:26:05.670665] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:23:53.889 [2024-11-28 18:26:05.670840] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.889 [2024-11-28 18:26:05.823132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.889 [2024-11-28 18:26:05.959773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.889 [2024-11-28 18:26:05.959868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:53.889 [2024-11-28 18:26:05.959894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.889 [2024-11-28 18:26:05.959918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.889 [2024-11-28 18:26:05.959938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.889 [2024-11-28 18:26:05.961612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.820 [2024-11-28 18:26:06.687739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.820 malloc0 00:23:54.820 [2024-11-28 18:26:06.749578] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.820 [2024-11-28 18:26:06.749941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2514686 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2514686 /var/tmp/bdevperf.sock 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2514686 ']' 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.820 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.820 [2024-11-28 18:26:06.859201] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:23:54.820 [2024-11-28 18:26:06.859334] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2514686 ] 00:23:55.079 [2024-11-28 18:26:07.004196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.079 [2024-11-28 18:26:07.140602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.013 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.013 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:56.013 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BR75MuXkWt 00:23:56.270 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:56.528 [2024-11-28 18:26:08.423839] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.528 nvme0n1 00:23:56.528 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:56.786 Running I/O for 1 seconds... 00:23:57.721 2361.00 IOPS, 9.22 MiB/s 00:23:57.721 Latency(us) 00:23:57.721 [2024-11-28T17:26:09.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.721 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:57.721 Verification LBA range: start 0x0 length 0x2000 00:23:57.721 nvme0n1 : 1.04 2403.31 9.39 0.00 0.00 52511.40 8592.50 56312.41 00:23:57.721 [2024-11-28T17:26:09.821Z] =================================================================================================================== 00:23:57.721 [2024-11-28T17:26:09.821Z] Total : 2403.31 9.39 0.00 0.00 52511.40 8592.50 56312.41 00:23:57.721 { 00:23:57.721 "results": [ 00:23:57.721 { 00:23:57.721 "job": "nvme0n1", 00:23:57.721 "core_mask": "0x2", 00:23:57.721 "workload": "verify", 00:23:57.721 "status": "finished", 00:23:57.721 "verify_range": { 00:23:57.721 "start": 0, 00:23:57.721 "length": 8192 00:23:57.721 }, 00:23:57.721 "queue_depth": 128, 00:23:57.721 "io_size": 4096, 00:23:57.721 "runtime": 1.035654, 00:23:57.721 "iops": 2403.312303143714, 00:23:57.721 "mibps": 9.387938684155133, 00:23:57.721 "io_failed": 0, 00:23:57.721 "io_timeout": 0, 00:23:57.721 "avg_latency_us": 52511.39897921223, 00:23:57.721 "min_latency_us": 8592.497777777779, 00:23:57.721 "max_latency_us": 56312.414814814816 00:23:57.721 } 00:23:57.721 ], 00:23:57.721 "core_count": 1 00:23:57.721 } 00:23:57.721 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:57.721 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.721 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.721 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.721 18:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:57.721 "subsystems": [ 00:23:57.721 { 00:23:57.721 "subsystem": "keyring", 00:23:57.721 "config": [ 00:23:57.721 { 00:23:57.721 "method": "keyring_file_add_key", 00:23:57.721 "params": { 00:23:57.721 "name": "key0", 00:23:57.721 "path": "/tmp/tmp.BR75MuXkWt" 00:23:57.721 } 00:23:57.721 } 00:23:57.721 ] 00:23:57.721 }, 00:23:57.721 { 00:23:57.721 "subsystem": "iobuf", 00:23:57.721 "config": [ 00:23:57.721 { 00:23:57.721 "method": "iobuf_set_options", 00:23:57.721 "params": { 00:23:57.721 "small_pool_count": 8192, 00:23:57.721 "large_pool_count": 1024, 00:23:57.721 "small_bufsize": 8192, 00:23:57.721 "large_bufsize": 135168, 00:23:57.721 "enable_numa": false 00:23:57.721 } 00:23:57.721 } 00:23:57.721 ] 00:23:57.721 }, 00:23:57.721 { 00:23:57.721 "subsystem": "sock", 00:23:57.721 "config": [ 00:23:57.721 { 00:23:57.721 "method": "sock_set_default_impl", 00:23:57.721 "params": { 00:23:57.722 "impl_name": "posix" 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "sock_impl_set_options", 00:23:57.722 "params": { 00:23:57.722 "impl_name": "ssl", 00:23:57.722 "recv_buf_size": 4096, 00:23:57.722 "send_buf_size": 4096, 00:23:57.722 "enable_recv_pipe": true, 00:23:57.722 "enable_quickack": false, 00:23:57.722 "enable_placement_id": 0, 00:23:57.722 "enable_zerocopy_send_server": true, 00:23:57.722 "enable_zerocopy_send_client": false, 00:23:57.722 "zerocopy_threshold": 0, 00:23:57.722 "tls_version": 0, 00:23:57.722 "enable_ktls": false 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "sock_impl_set_options", 00:23:57.722 "params": { 00:23:57.722 "impl_name": "posix", 00:23:57.722 "recv_buf_size": 2097152, 00:23:57.722 "send_buf_size": 2097152, 00:23:57.722 "enable_recv_pipe": true, 00:23:57.722 "enable_quickack": false, 00:23:57.722 "enable_placement_id": 0, 00:23:57.722 "enable_zerocopy_send_server": true, 00:23:57.722 "enable_zerocopy_send_client": false, 00:23:57.722 "zerocopy_threshold": 0, 00:23:57.722 "tls_version": 0, 00:23:57.722 "enable_ktls": false 00:23:57.722 } 00:23:57.722 } 00:23:57.722 ] 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "subsystem": "vmd", 00:23:57.722 "config": [] 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "subsystem": "accel", 00:23:57.722 "config": [ 00:23:57.722 { 00:23:57.722 "method": "accel_set_options", 00:23:57.722 "params": { 00:23:57.722 "small_cache_size": 128, 00:23:57.722 "large_cache_size": 16, 00:23:57.722 "task_count": 2048, 00:23:57.722 "sequence_count": 2048, 00:23:57.722 "buf_count": 2048 00:23:57.722 } 00:23:57.722 } 00:23:57.722 ] 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "subsystem": "bdev", 00:23:57.722 "config": [ 00:23:57.722 { 00:23:57.722 "method": "bdev_set_options", 00:23:57.722 "params": { 00:23:57.722 "bdev_io_pool_size": 65535, 00:23:57.722 "bdev_io_cache_size": 256, 00:23:57.722 "bdev_auto_examine": true, 00:23:57.722 "iobuf_small_cache_size": 128, 00:23:57.722 "iobuf_large_cache_size": 16 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "bdev_raid_set_options", 00:23:57.722 "params": { 00:23:57.722 "process_window_size_kb": 1024, 00:23:57.722 "process_max_bandwidth_mb_sec": 0 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "bdev_iscsi_set_options", 00:23:57.722 "params": { 00:23:57.722 "timeout_sec": 30 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "bdev_nvme_set_options", 00:23:57.722 "params": { 00:23:57.722 "action_on_timeout": "none", 00:23:57.722 
"timeout_us": 0, 00:23:57.722 "timeout_admin_us": 0, 00:23:57.722 "keep_alive_timeout_ms": 10000, 00:23:57.722 "arbitration_burst": 0, 00:23:57.722 "low_priority_weight": 0, 00:23:57.722 "medium_priority_weight": 0, 00:23:57.722 "high_priority_weight": 0, 00:23:57.722 "nvme_adminq_poll_period_us": 10000, 00:23:57.722 "nvme_ioq_poll_period_us": 0, 00:23:57.722 "io_queue_requests": 0, 00:23:57.722 "delay_cmd_submit": true, 00:23:57.722 "transport_retry_count": 4, 00:23:57.722 "bdev_retry_count": 3, 00:23:57.722 "transport_ack_timeout": 0, 00:23:57.722 "ctrlr_loss_timeout_sec": 0, 00:23:57.722 "reconnect_delay_sec": 0, 00:23:57.722 "fast_io_fail_timeout_sec": 0, 00:23:57.722 "disable_auto_failback": false, 00:23:57.722 "generate_uuids": false, 00:23:57.722 "transport_tos": 0, 00:23:57.722 "nvme_error_stat": false, 00:23:57.722 "rdma_srq_size": 0, 00:23:57.722 "io_path_stat": false, 00:23:57.722 "allow_accel_sequence": false, 00:23:57.722 "rdma_max_cq_size": 0, 00:23:57.722 "rdma_cm_event_timeout_ms": 0, 00:23:57.722 "dhchap_digests": [ 00:23:57.722 "sha256", 00:23:57.722 "sha384", 00:23:57.722 "sha512" 00:23:57.722 ], 00:23:57.722 "dhchap_dhgroups": [ 00:23:57.722 "null", 00:23:57.722 "ffdhe2048", 00:23:57.722 "ffdhe3072", 00:23:57.722 "ffdhe4096", 00:23:57.722 "ffdhe6144", 00:23:57.722 "ffdhe8192" 00:23:57.722 ] 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "bdev_nvme_set_hotplug", 00:23:57.722 "params": { 00:23:57.722 "period_us": 100000, 00:23:57.722 "enable": false 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "bdev_malloc_create", 00:23:57.722 "params": { 00:23:57.722 "name": "malloc0", 00:23:57.722 "num_blocks": 8192, 00:23:57.722 "block_size": 4096, 00:23:57.722 "physical_block_size": 4096, 00:23:57.722 "uuid": "d6fc5c26-0996-4d28-adf4-042cc848d45c", 00:23:57.722 "optimal_io_boundary": 0, 00:23:57.722 "md_size": 0, 00:23:57.722 "dif_type": 0, 00:23:57.722 "dif_is_head_of_md": false, 00:23:57.722 "dif_pi_format": 0 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "bdev_wait_for_examine" 00:23:57.722 } 00:23:57.722 ] 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "subsystem": "nbd", 00:23:57.722 "config": [] 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "subsystem": "scheduler", 00:23:57.722 "config": [ 00:23:57.722 { 00:23:57.722 "method": "framework_set_scheduler", 00:23:57.722 "params": { 00:23:57.722 "name": "static" 00:23:57.722 } 00:23:57.722 } 00:23:57.722 ] 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "subsystem": "nvmf", 00:23:57.722 "config": [ 00:23:57.722 { 00:23:57.722 "method": "nvmf_set_config", 00:23:57.722 "params": { 00:23:57.722 "discovery_filter": "match_any", 00:23:57.722 "admin_cmd_passthru": { 00:23:57.722 "identify_ctrlr": false 00:23:57.722 }, 00:23:57.722 "dhchap_digests": [ 00:23:57.722 "sha256", 00:23:57.722 "sha384", 00:23:57.722 "sha512" 00:23:57.722 ], 00:23:57.722 "dhchap_dhgroups": [ 00:23:57.722 "null", 00:23:57.722 "ffdhe2048", 00:23:57.722 "ffdhe3072", 00:23:57.722 "ffdhe4096", 00:23:57.722 "ffdhe6144", 00:23:57.722 "ffdhe8192" 00:23:57.722 ] 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "nvmf_set_max_subsystems", 00:23:57.722 "params": { 00:23:57.722 "max_subsystems": 1024 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "nvmf_set_crdt", 00:23:57.722 "params": { 00:23:57.722 "crdt1": 0, 00:23:57.722 "crdt2": 0, 00:23:57.722 "crdt3": 0 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "nvmf_create_transport", 00:23:57.722 "params": 
{ 00:23:57.722 "trtype": "TCP", 00:23:57.722 "max_queue_depth": 128, 00:23:57.722 "max_io_qpairs_per_ctrlr": 127, 00:23:57.722 "in_capsule_data_size": 4096, 00:23:57.722 "max_io_size": 131072, 00:23:57.722 "io_unit_size": 131072, 00:23:57.722 "max_aq_depth": 128, 00:23:57.722 "num_shared_buffers": 511, 00:23:57.722 "buf_cache_size": 4294967295, 00:23:57.722 "dif_insert_or_strip": false, 00:23:57.722 "zcopy": false, 00:23:57.722 "c2h_success": false, 00:23:57.722 "sock_priority": 0, 00:23:57.722 "abort_timeout_sec": 1, 00:23:57.722 "ack_timeout": 0, 00:23:57.722 "data_wr_pool_size": 0 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "nvmf_create_subsystem", 00:23:57.722 "params": { 00:23:57.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.722 "allow_any_host": false, 00:23:57.722 "serial_number": "00000000000000000000", 00:23:57.722 "model_number": "SPDK bdev Controller", 00:23:57.722 "max_namespaces": 32, 00:23:57.722 "min_cntlid": 1, 00:23:57.722 "max_cntlid": 65519, 00:23:57.722 "ana_reporting": false 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "nvmf_subsystem_add_host", 00:23:57.722 "params": { 00:23:57.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.722 "host": "nqn.2016-06.io.spdk:host1", 00:23:57.722 "psk": "key0" 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "nvmf_subsystem_add_ns", 00:23:57.722 "params": { 00:23:57.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.722 "namespace": { 00:23:57.722 "nsid": 1, 00:23:57.722 "bdev_name": "malloc0", 00:23:57.722 "nguid": "D6FC5C2609964D28ADF4042CC848D45C", 00:23:57.722 "uuid": "d6fc5c26-0996-4d28-adf4-042cc848d45c", 00:23:57.722 "no_auto_visible": false 00:23:57.722 } 00:23:57.722 } 00:23:57.722 }, 00:23:57.722 { 00:23:57.722 "method": "nvmf_subsystem_add_listener", 00:23:57.722 "params": { 00:23:57.722 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.722 "listen_address": { 00:23:57.722 "trtype": "TCP", 00:23:57.722 "adrfam": "IPv4", 00:23:57.722 "traddr": "10.0.0.2", 00:23:57.722 "trsvcid": "4420" 00:23:57.722 }, 00:23:57.722 "secure_channel": false, 00:23:57.722 "sock_impl": "ssl" 00:23:57.722 } 00:23:57.722 } 00:23:57.722 ] 00:23:57.722 } 00:23:57.722 ] 00:23:57.722 }' 00:23:57.722 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:58.289 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:58.289 "subsystems": [ 00:23:58.289 { 00:23:58.289 "subsystem": "keyring", 00:23:58.289 "config": [ 00:23:58.289 { 00:23:58.289 "method": "keyring_file_add_key", 00:23:58.289 "params": { 00:23:58.289 "name": "key0", 00:23:58.289 "path": "/tmp/tmp.BR75MuXkWt" 00:23:58.289 } 00:23:58.289 } 00:23:58.289 ] 00:23:58.289 }, 00:23:58.289 { 00:23:58.289 "subsystem": "iobuf", 00:23:58.289 "config": [ 00:23:58.289 { 00:23:58.289 "method": "iobuf_set_options", 00:23:58.289 "params": { 00:23:58.289 "small_pool_count": 8192, 00:23:58.289 "large_pool_count": 1024, 00:23:58.289 "small_bufsize": 8192, 00:23:58.289 "large_bufsize": 135168, 00:23:58.289 "enable_numa": false 00:23:58.289 } 00:23:58.289 } 00:23:58.289 ] 00:23:58.289 }, 00:23:58.289 { 00:23:58.289 "subsystem": "sock", 00:23:58.289 "config": [ 00:23:58.289 { 00:23:58.289 "method": "sock_set_default_impl", 00:23:58.289 "params": { 00:23:58.289 "impl_name": "posix" 00:23:58.289 } 00:23:58.289 }, 00:23:58.289 { 00:23:58.289 "method": "sock_impl_set_options", 00:23:58.289 
"params": { 00:23:58.289 "impl_name": "ssl", 00:23:58.289 "recv_buf_size": 4096, 00:23:58.289 "send_buf_size": 4096, 00:23:58.289 "enable_recv_pipe": true, 00:23:58.289 "enable_quickack": false, 00:23:58.289 "enable_placement_id": 0, 00:23:58.289 "enable_zerocopy_send_server": true, 00:23:58.289 "enable_zerocopy_send_client": false, 00:23:58.289 "zerocopy_threshold": 0, 00:23:58.289 "tls_version": 0, 00:23:58.289 "enable_ktls": false 00:23:58.289 } 00:23:58.289 }, 00:23:58.289 { 00:23:58.289 "method": "sock_impl_set_options", 00:23:58.289 "params": { 00:23:58.289 "impl_name": "posix", 00:23:58.289 "recv_buf_size": 2097152, 00:23:58.289 "send_buf_size": 2097152, 00:23:58.289 "enable_recv_pipe": true, 00:23:58.289 "enable_quickack": false, 00:23:58.289 "enable_placement_id": 0, 00:23:58.289 "enable_zerocopy_send_server": true, 00:23:58.289 "enable_zerocopy_send_client": false, 00:23:58.289 "zerocopy_threshold": 0, 00:23:58.290 "tls_version": 0, 00:23:58.290 "enable_ktls": false 00:23:58.290 } 00:23:58.290 } 00:23:58.290 ] 00:23:58.290 }, 00:23:58.290 { 00:23:58.290 "subsystem": "vmd", 00:23:58.290 "config": [] 00:23:58.290 }, 00:23:58.290 { 00:23:58.290 "subsystem": "accel", 00:23:58.290 "config": [ 00:23:58.290 { 00:23:58.290 "method": "accel_set_options", 00:23:58.290 "params": { 00:23:58.290 "small_cache_size": 128, 00:23:58.290 "large_cache_size": 16, 00:23:58.290 "task_count": 2048, 00:23:58.290 "sequence_count": 2048, 00:23:58.290 "buf_count": 2048 00:23:58.290 } 00:23:58.290 } 00:23:58.290 ] 00:23:58.290 }, 00:23:58.290 { 00:23:58.290 "subsystem": "bdev", 00:23:58.290 "config": [ 00:23:58.290 { 00:23:58.290 "method": "bdev_set_options", 00:23:58.290 "params": { 00:23:58.290 "bdev_io_pool_size": 65535, 00:23:58.290 "bdev_io_cache_size": 256, 00:23:58.290 "bdev_auto_examine": true, 00:23:58.290 "iobuf_small_cache_size": 128, 00:23:58.290 "iobuf_large_cache_size": 16 00:23:58.290 } 00:23:58.290 }, 00:23:58.290 { 00:23:58.290 "method": "bdev_raid_set_options", 00:23:58.290 "params": { 00:23:58.290 "process_window_size_kb": 1024, 00:23:58.290 "process_max_bandwidth_mb_sec": 0 00:23:58.290 } 00:23:58.290 }, 00:23:58.290 { 00:23:58.290 "method": "bdev_iscsi_set_options", 00:23:58.290 "params": { 00:23:58.290 "timeout_sec": 30 00:23:58.290 } 00:23:58.290 }, 00:23:58.290 { 00:23:58.290 "method": "bdev_nvme_set_options", 00:23:58.290 "params": { 00:23:58.290 "action_on_timeout": "none", 00:23:58.290 "timeout_us": 0, 00:23:58.290 "timeout_admin_us": 0, 00:23:58.290 "keep_alive_timeout_ms": 10000, 00:23:58.290 "arbitration_burst": 0, 00:23:58.290 "low_priority_weight": 0, 00:23:58.290 "medium_priority_weight": 0, 00:23:58.290 "high_priority_weight": 0, 00:23:58.290 "nvme_adminq_poll_period_us": 10000, 00:23:58.290 "nvme_ioq_poll_period_us": 0, 00:23:58.290 "io_queue_requests": 512, 00:23:58.290 "delay_cmd_submit": true, 00:23:58.290 "transport_retry_count": 4, 00:23:58.290 "bdev_retry_count": 3, 00:23:58.290 "transport_ack_timeout": 0, 00:23:58.290 "ctrlr_loss_timeout_sec": 0, 00:23:58.290 "reconnect_delay_sec": 0, 00:23:58.290 "fast_io_fail_timeout_sec": 0, 00:23:58.290 "disable_auto_failback": false, 00:23:58.290 "generate_uuids": false, 00:23:58.290 "transport_tos": 0, 00:23:58.290 "nvme_error_stat": false, 00:23:58.290 "rdma_srq_size": 0, 00:23:58.290 "io_path_stat": false, 00:23:58.290 "allow_accel_sequence": false, 00:23:58.290 "rdma_max_cq_size": 0, 00:23:58.290 "rdma_cm_event_timeout_ms": 0, 00:23:58.290 "dhchap_digests": [ 00:23:58.290 "sha256", 00:23:58.290 "sha384", 00:23:58.290 
"sha512" 00:23:58.290 ], 00:23:58.290 "dhchap_dhgroups": [ 00:23:58.290 "null", 00:23:58.290 "ffdhe2048", 00:23:58.290 "ffdhe3072", 00:23:58.290 "ffdhe4096", 00:23:58.290 "ffdhe6144", 00:23:58.290 "ffdhe8192" 00:23:58.290 ] 00:23:58.290 } 00:23:58.290 }, 00:23:58.290 { 00:23:58.290 "method": "bdev_nvme_attach_controller", 00:23:58.290 "params": { 00:23:58.290 "name": "nvme0", 00:23:58.290 "trtype": "TCP", 00:23:58.290 "adrfam": "IPv4", 00:23:58.290 "traddr": "10.0.0.2", 00:23:58.290 "trsvcid": "4420", 00:23:58.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.290 "prchk_reftag": false, 00:23:58.290 "prchk_guard": false, 00:23:58.290 "ctrlr_loss_timeout_sec": 0, 00:23:58.290 "reconnect_delay_sec": 0, 00:23:58.290 "fast_io_fail_timeout_sec": 0, 00:23:58.290 "psk": "key0", 00:23:58.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.290 "hdgst": false, 00:23:58.290 "ddgst": false, 00:23:58.290 "multipath": "multipath" 00:23:58.290 } 00:23:58.290 }, 00:23:58.290 { 00:23:58.290 "method": "bdev_nvme_set_hotplug", 00:23:58.290 "params": { 00:23:58.290 "period_us": 100000, 00:23:58.290 "enable": false 00:23:58.290 } 00:23:58.290 }, 00:23:58.290 { 00:23:58.290 "method": "bdev_enable_histogram", 00:23:58.290 "params": { 00:23:58.290 "name": "nvme0n1", 00:23:58.290 "enable": true 00:23:58.290 } 00:23:58.290 }, 00:23:58.290 { 00:23:58.290 "method": "bdev_wait_for_examine" 00:23:58.290 } 00:23:58.290 ] 00:23:58.290 }, 00:23:58.290 { 00:23:58.290 "subsystem": "nbd", 00:23:58.290 "config": [] 00:23:58.290 } 00:23:58.290 ] 00:23:58.290 }' 00:23:58.290 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2514686 00:23:58.290 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2514686 ']' 00:23:58.290 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2514686 00:23:58.290 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:58.290 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.290 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2514686 00:23:58.290 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:58.290 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.290 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2514686' 00:23:58.290 killing process with pid 2514686 00:23:58.290 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2514686 00:23:58.290 Received shutdown signal, test time was about 1.000000 seconds 00:23:58.290 00:23:58.290 Latency(us) 00:23:58.290 [2024-11-28T17:26:10.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.290 [2024-11-28T17:26:10.390Z] =================================================================================================================== 00:23:58.290 [2024-11-28T17:26:10.390Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.290 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2514686 00:23:59.224 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2514534 00:23:59.224 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2514534 
']' 00:23:59.224 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2514534 00:23:59.224 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.224 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.224 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2514534 00:23:59.224 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.224 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.224 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2514534' 00:23:59.224 killing process with pid 2514534 00:23:59.224 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2514534 00:23:59.224 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2514534 00:24:00.598 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:00.599 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:00.599 "subsystems": [ 00:24:00.599 { 00:24:00.599 "subsystem": "keyring", 00:24:00.599 "config": [ 00:24:00.599 { 00:24:00.599 "method": "keyring_file_add_key", 00:24:00.599 "params": { 00:24:00.599 "name": "key0", 00:24:00.599 "path": "/tmp/tmp.BR75MuXkWt" 00:24:00.599 } 00:24:00.599 } 00:24:00.599 ] 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "subsystem": "iobuf", 00:24:00.599 "config": [ 00:24:00.599 { 00:24:00.599 "method": "iobuf_set_options", 00:24:00.599 "params": { 00:24:00.599 "small_pool_count": 8192, 00:24:00.599 "large_pool_count": 1024, 00:24:00.599 "small_bufsize": 8192, 00:24:00.599 "large_bufsize": 135168, 00:24:00.599 "enable_numa": false 00:24:00.599 } 00:24:00.599 } 00:24:00.599 ] 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "subsystem": "sock", 00:24:00.599 "config": [ 00:24:00.599 { 00:24:00.599 "method": "sock_set_default_impl", 00:24:00.599 "params": { 00:24:00.599 "impl_name": "posix" 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "sock_impl_set_options", 00:24:00.599 "params": { 00:24:00.599 "impl_name": "ssl", 00:24:00.599 "recv_buf_size": 4096, 00:24:00.599 "send_buf_size": 4096, 00:24:00.599 "enable_recv_pipe": true, 00:24:00.599 "enable_quickack": false, 00:24:00.599 "enable_placement_id": 0, 00:24:00.599 "enable_zerocopy_send_server": true, 00:24:00.599 "enable_zerocopy_send_client": false, 00:24:00.599 "zerocopy_threshold": 0, 00:24:00.599 "tls_version": 0, 00:24:00.599 "enable_ktls": false 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "sock_impl_set_options", 00:24:00.599 "params": { 00:24:00.599 "impl_name": "posix", 00:24:00.599 "recv_buf_size": 2097152, 00:24:00.599 "send_buf_size": 2097152, 00:24:00.599 "enable_recv_pipe": true, 00:24:00.599 "enable_quickack": false, 00:24:00.599 "enable_placement_id": 0, 00:24:00.599 "enable_zerocopy_send_server": true, 00:24:00.599 "enable_zerocopy_send_client": false, 00:24:00.599 "zerocopy_threshold": 0, 00:24:00.599 "tls_version": 0, 00:24:00.599 "enable_ktls": false 00:24:00.599 } 00:24:00.599 } 00:24:00.599 ] 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "subsystem": "vmd", 00:24:00.599 "config": [] 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "subsystem": "accel", 00:24:00.599 
"config": [ 00:24:00.599 { 00:24:00.599 "method": "accel_set_options", 00:24:00.599 "params": { 00:24:00.599 "small_cache_size": 128, 00:24:00.599 "large_cache_size": 16, 00:24:00.599 "task_count": 2048, 00:24:00.599 "sequence_count": 2048, 00:24:00.599 "buf_count": 2048 00:24:00.599 } 00:24:00.599 } 00:24:00.599 ] 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "subsystem": "bdev", 00:24:00.599 "config": [ 00:24:00.599 { 00:24:00.599 "method": "bdev_set_options", 00:24:00.599 "params": { 00:24:00.599 "bdev_io_pool_size": 65535, 00:24:00.599 "bdev_io_cache_size": 256, 00:24:00.599 "bdev_auto_examine": true, 00:24:00.599 "iobuf_small_cache_size": 128, 00:24:00.599 "iobuf_large_cache_size": 16 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "bdev_raid_set_options", 00:24:00.599 "params": { 00:24:00.599 "process_window_size_kb": 1024, 00:24:00.599 "process_max_bandwidth_mb_sec": 0 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "bdev_iscsi_set_options", 00:24:00.599 "params": { 00:24:00.599 "timeout_sec": 30 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "bdev_nvme_set_options", 00:24:00.599 "params": { 00:24:00.599 "action_on_timeout": "none", 00:24:00.599 "timeout_us": 0, 00:24:00.599 "timeout_admin_us": 0, 00:24:00.599 "keep_alive_timeout_ms": 10000, 00:24:00.599 "arbitration_burst": 0, 00:24:00.599 "low_priority_weight": 0, 00:24:00.599 "medium_priority_weight": 0, 00:24:00.599 "high_priority_weight": 0, 00:24:00.599 "nvme_adminq_poll_period_us": 10000, 00:24:00.599 "nvme_ioq_poll_period_us": 0, 00:24:00.599 "io_queue_requests": 0, 00:24:00.599 "delay_cmd_submit": true, 00:24:00.599 "transport_retry_count": 4, 00:24:00.599 "bdev_retry_count": 3, 00:24:00.599 "transport_ack_timeout": 0, 00:24:00.599 "ctrlr_loss_timeout_sec": 0, 00:24:00.599 "reconnect_delay_sec": 0, 00:24:00.599 "fast_io_fail_timeout_sec": 0, 00:24:00.599 "disable_auto_failback": false, 00:24:00.599 "generate_uuids": false, 00:24:00.599 "transport_tos": 0, 00:24:00.599 "nvme_error_stat": false, 00:24:00.599 "rdma_srq_size": 0, 00:24:00.599 "io_path_stat": false, 00:24:00.599 "allow_accel_sequence": false, 00:24:00.599 "rdma_max_cq_size": 0, 00:24:00.599 "rdma_cm_event_timeout_ms": 0, 00:24:00.599 "dhchap_digests": [ 00:24:00.599 "sha256", 00:24:00.599 "sha384", 00:24:00.599 "sha512" 00:24:00.599 ], 00:24:00.599 "dhchap_dhgroups": [ 00:24:00.599 "null", 00:24:00.599 "ffdhe2048", 00:24:00.599 "ffdhe3072", 00:24:00.599 "ffdhe4096", 00:24:00.599 "ffdhe6144", 00:24:00.599 "ffdhe8192" 00:24:00.599 ] 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "bdev_nvme_set_hotplug", 00:24:00.599 "params": { 00:24:00.599 "period_us": 100000, 00:24:00.599 "enable": false 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "bdev_malloc_create", 00:24:00.599 "params": { 00:24:00.599 "name": "malloc0", 00:24:00.599 "num_blocks": 8192, 00:24:00.599 "block_size": 4096, 00:24:00.599 "physical_block_size": 4096, 00:24:00.599 "uuid": "d6fc5c26-0996-4d28-adf4-042cc848d45c", 00:24:00.599 "optimal_io_boundary": 0, 00:24:00.599 "md_size": 0, 00:24:00.599 "dif_type": 0, 00:24:00.599 "dif_is_head_of_md": false, 00:24:00.599 "dif_pi_format": 0 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "bdev_wait_for_examine" 00:24:00.599 } 00:24:00.599 ] 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "subsystem": "nbd", 00:24:00.599 "config": [] 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "subsystem": "scheduler", 00:24:00.599 "config": [ 00:24:00.599 { 
00:24:00.599 "method": "framework_set_scheduler", 00:24:00.599 "params": { 00:24:00.599 "name": "static" 00:24:00.599 } 00:24:00.599 } 00:24:00.599 ] 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "subsystem": "nvmf", 00:24:00.599 "config": [ 00:24:00.599 { 00:24:00.599 "method": "nvmf_set_config", 00:24:00.599 "params": { 00:24:00.599 "discovery_filter": "match_any", 00:24:00.599 "admin_cmd_passthru": { 00:24:00.599 "identify_ctrlr": false 00:24:00.599 }, 00:24:00.599 "dhchap_digests": [ 00:24:00.599 "sha256", 00:24:00.599 "sha384", 00:24:00.599 "sha512" 00:24:00.599 ], 00:24:00.599 "dhchap_dhgroups": [ 00:24:00.599 "null", 00:24:00.599 "ffdhe2048", 00:24:00.599 "ffdhe3072", 00:24:00.599 "ffdhe4096", 00:24:00.599 "ffdhe6144", 00:24:00.599 "ffdhe8192" 00:24:00.599 ] 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "nvmf_set_max_subsystems", 00:24:00.599 "params": { 00:24:00.599 "max_subsystems": 1024 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "nvmf_set_crdt", 00:24:00.599 "params": { 00:24:00.599 "crdt1": 0, 00:24:00.599 "crdt2": 0, 00:24:00.599 "crdt3": 0 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "nvmf_create_transport", 00:24:00.599 "params": { 00:24:00.599 "trtype": "TCP", 00:24:00.599 "max_queue_depth": 128, 00:24:00.599 "max_io_qpairs_per_ctrlr": 127, 00:24:00.599 "in_capsule_data_size": 4096, 00:24:00.599 "max_io_size": 131072, 00:24:00.599 "io_unit_size": 131072, 00:24:00.599 "max_aq_depth": 128, 00:24:00.599 "num_shared_buffers": 511, 00:24:00.599 "buf_cache_size": 4294967295, 00:24:00.599 "dif_insert_or_strip": false, 00:24:00.599 "zcopy": false, 00:24:00.599 "c2h_success": false, 00:24:00.599 "sock_priority": 0, 00:24:00.599 "abort_timeout_sec": 1, 00:24:00.599 "ack_timeout": 0, 00:24:00.599 "data_wr_pool_size": 0 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "nvmf_create_subsystem", 00:24:00.599 "params": { 00:24:00.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.599 "allow_any_host": false, 00:24:00.599 "serial_number": "00000000000000000000", 00:24:00.599 "model_number": "SPDK bdev Controller", 00:24:00.599 "max_namespaces": 32, 00:24:00.599 "min_cntlid": 1, 00:24:00.599 "max_cntlid": 65519, 00:24:00.599 "ana_reporting": false 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "nvmf_subsystem_add_host", 00:24:00.599 "params": { 00:24:00.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.599 "host": "nqn.2016-06.io.spdk:host1", 00:24:00.599 "psk": "key0" 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "nvmf_subsystem_add_ns", 00:24:00.599 "params": { 00:24:00.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.599 "namespace": { 00:24:00.599 "nsid": 1, 00:24:00.599 "bdev_name": "malloc0", 00:24:00.599 "nguid": "D6FC5C2609964D28ADF4042CC848D45C", 00:24:00.599 "uuid": "d6fc5c26-0996-4d28-adf4-042cc848d45c", 00:24:00.599 "no_auto_visible": false 00:24:00.599 } 00:24:00.599 } 00:24:00.599 }, 00:24:00.599 { 00:24:00.599 "method": "nvmf_subsystem_add_listener", 00:24:00.599 "params": { 00:24:00.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.599 "listen_address": { 00:24:00.599 "trtype": "TCP", 00:24:00.599 "adrfam": "IPv4", 00:24:00.599 "traddr": "10.0.0.2", 00:24:00.599 "trsvcid": "4420" 00:24:00.599 }, 00:24:00.599 "secure_channel": false, 00:24:00.599 "sock_impl": "ssl" 00:24:00.599 } 00:24:00.599 } 00:24:00.599 ] 00:24:00.599 } 00:24:00.599 ] 00:24:00.599 }' 00:24:00.599 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
timing_enter start_nvmf_tgt 00:24:00.599 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.599 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.599 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2515359 00:24:00.599 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:00.599 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2515359 00:24:00.599 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2515359 ']' 00:24:00.600 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.600 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.600 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.600 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.600 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.600 [2024-11-28 18:26:12.485217] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:24:00.600 [2024-11-28 18:26:12.485357] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.600 [2024-11-28 18:26:12.625479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.856 [2024-11-28 18:26:12.756491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.856 [2024-11-28 18:26:12.756578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.856 [2024-11-28 18:26:12.756603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.856 [2024-11-28 18:26:12.756628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.856 [2024-11-28 18:26:12.756648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.856 [2024-11-28 18:26:12.758375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.420 [2024-11-28 18:26:13.320974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.420 [2024-11-28 18:26:13.353028] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.420 [2024-11-28 18:26:13.353375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2515497 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2515497 /var/tmp/bdevperf.sock 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2515497 ']' 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:01.420 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:01.420 "subsystems": [ 00:24:01.420 { 00:24:01.420 "subsystem": "keyring", 00:24:01.420 "config": [ 00:24:01.420 { 00:24:01.420 "method": "keyring_file_add_key", 00:24:01.420 "params": { 00:24:01.420 "name": "key0", 00:24:01.420 "path": "/tmp/tmp.BR75MuXkWt" 00:24:01.420 } 00:24:01.420 } 00:24:01.420 ] 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "subsystem": "iobuf", 00:24:01.420 "config": [ 00:24:01.420 { 00:24:01.420 "method": "iobuf_set_options", 00:24:01.420 "params": { 00:24:01.420 "small_pool_count": 8192, 00:24:01.420 "large_pool_count": 1024, 00:24:01.420 "small_bufsize": 8192, 00:24:01.420 "large_bufsize": 135168, 00:24:01.420 "enable_numa": false 00:24:01.420 } 00:24:01.420 } 00:24:01.420 ] 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "subsystem": "sock", 00:24:01.420 "config": [ 00:24:01.420 { 00:24:01.420 "method": "sock_set_default_impl", 00:24:01.420 "params": { 00:24:01.420 "impl_name": "posix" 00:24:01.420 } 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "method": "sock_impl_set_options", 00:24:01.420 "params": { 00:24:01.420 "impl_name": "ssl", 00:24:01.420 "recv_buf_size": 4096, 00:24:01.420 "send_buf_size": 4096, 00:24:01.420 "enable_recv_pipe": true, 00:24:01.420 "enable_quickack": false, 00:24:01.420 "enable_placement_id": 0, 00:24:01.420 "enable_zerocopy_send_server": true, 00:24:01.420 "enable_zerocopy_send_client": false, 00:24:01.420 "zerocopy_threshold": 0, 00:24:01.420 "tls_version": 0, 00:24:01.420 "enable_ktls": false 00:24:01.420 } 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "method": "sock_impl_set_options", 00:24:01.420 "params": { 00:24:01.420 "impl_name": "posix", 00:24:01.420 "recv_buf_size": 2097152, 00:24:01.420 "send_buf_size": 2097152, 00:24:01.420 "enable_recv_pipe": true, 00:24:01.420 "enable_quickack": false, 00:24:01.420 "enable_placement_id": 0, 00:24:01.420 "enable_zerocopy_send_server": true, 00:24:01.420 "enable_zerocopy_send_client": false, 00:24:01.420 "zerocopy_threshold": 0, 00:24:01.420 "tls_version": 0, 00:24:01.420 "enable_ktls": false 00:24:01.420 } 00:24:01.420 } 00:24:01.420 ] 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "subsystem": "vmd", 00:24:01.420 "config": [] 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "subsystem": "accel", 00:24:01.420 "config": [ 00:24:01.420 { 00:24:01.420 "method": "accel_set_options", 00:24:01.420 "params": { 00:24:01.420 "small_cache_size": 128, 00:24:01.420 "large_cache_size": 16, 00:24:01.420 "task_count": 2048, 00:24:01.420 "sequence_count": 2048, 00:24:01.420 "buf_count": 2048 00:24:01.420 } 00:24:01.420 } 00:24:01.420 ] 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "subsystem": "bdev", 00:24:01.420 "config": [ 00:24:01.420 { 00:24:01.420 "method": "bdev_set_options", 00:24:01.420 "params": { 00:24:01.420 "bdev_io_pool_size": 65535, 00:24:01.420 "bdev_io_cache_size": 256, 00:24:01.420 "bdev_auto_examine": true, 00:24:01.420 "iobuf_small_cache_size": 128, 00:24:01.420 "iobuf_large_cache_size": 16 00:24:01.420 } 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "method": "bdev_raid_set_options", 00:24:01.420 "params": { 00:24:01.420 "process_window_size_kb": 1024, 00:24:01.420 "process_max_bandwidth_mb_sec": 0 00:24:01.420 } 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "method": "bdev_iscsi_set_options", 00:24:01.420 "params": { 00:24:01.420 "timeout_sec": 30 00:24:01.420 } 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "method": "bdev_nvme_set_options", 00:24:01.420 "params": { 00:24:01.420 "action_on_timeout": "none", 
00:24:01.420 "timeout_us": 0, 00:24:01.420 "timeout_admin_us": 0, 00:24:01.420 "keep_alive_timeout_ms": 10000, 00:24:01.420 "arbitration_burst": 0, 00:24:01.420 "low_priority_weight": 0, 00:24:01.420 "medium_priority_weight": 0, 00:24:01.420 "high_priority_weight": 0, 00:24:01.420 "nvme_adminq_poll_period_us": 10000, 00:24:01.420 "nvme_ioq_poll_period_us": 0, 00:24:01.420 "io_queue_requests": 512, 00:24:01.420 "delay_cmd_submit": true, 00:24:01.420 "transport_retry_count": 4, 00:24:01.420 "bdev_retry_count": 3, 00:24:01.420 "transport_ack_timeout": 0, 00:24:01.420 "ctrlr_loss_timeout_sec": 0, 00:24:01.420 "reconnect_delay_sec": 0, 00:24:01.420 "fast_io_fail_timeout_sec": 0, 00:24:01.420 "disable_auto_failback": false, 00:24:01.420 "generate_uuids": false, 00:24:01.420 "transport_tos": 0, 00:24:01.420 "nvme_error_stat": false, 00:24:01.420 "rdma_srq_size": 0, 00:24:01.420 "io_path_stat": false, 00:24:01.420 "allow_accel_sequence": false, 00:24:01.420 "rdma_max_cq_size": 0, 00:24:01.420 "rdma_cm_event_timeout_ms": 0, 00:24:01.420 "dhchap_digests": [ 00:24:01.420 "sha256", 00:24:01.420 "sha384", 00:24:01.420 "sha512" 00:24:01.420 ], 00:24:01.420 "dhchap_dhgroups": [ 00:24:01.420 "null", 00:24:01.420 "ffdhe2048", 00:24:01.420 "ffdhe3072", 00:24:01.420 "ffdhe4096", 00:24:01.420 "ffdhe6144", 00:24:01.420 "ffdhe8192" 00:24:01.420 ] 00:24:01.420 } 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "method": "bdev_nvme_attach_controller", 00:24:01.420 "params": { 00:24:01.420 "name": "nvme0", 00:24:01.420 "trtype": "TCP", 00:24:01.420 "adrfam": "IPv4", 00:24:01.420 "traddr": "10.0.0.2", 00:24:01.420 "trsvcid": "4420", 00:24:01.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.420 "prchk_reftag": false, 00:24:01.420 "prchk_guard": false, 00:24:01.420 "ctrlr_loss_timeout_sec": 0, 00:24:01.420 "reconnect_delay_sec": 0, 00:24:01.420 "fast_io_fail_timeout_sec": 0, 00:24:01.420 "psk": "key0", 00:24:01.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.420 "hdgst": false, 00:24:01.420 "ddgst": false, 00:24:01.421 "multipath": "multipath" 00:24:01.421 } 00:24:01.421 }, 00:24:01.421 { 00:24:01.421 "method": "bdev_nvme_set_hotplug", 00:24:01.421 "params": { 00:24:01.421 "period_us": 100000, 00:24:01.421 "enable": false 00:24:01.421 } 00:24:01.421 }, 00:24:01.421 { 00:24:01.421 "method": "bdev_enable_histogram", 00:24:01.421 "params": { 00:24:01.421 "name": "nvme0n1", 00:24:01.421 "enable": true 00:24:01.421 } 00:24:01.421 }, 00:24:01.421 { 00:24:01.421 "method": "bdev_wait_for_examine" 00:24:01.421 } 00:24:01.421 ] 00:24:01.421 }, 00:24:01.421 { 00:24:01.421 "subsystem": "nbd", 00:24:01.421 "config": [] 00:24:01.421 } 00:24:01.421 ] 00:24:01.421 }' 00:24:01.421 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.421 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.678 [2024-11-28 18:26:13.595128] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:24:01.678 [2024-11-28 18:26:13.595276] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2515497 ] 00:24:01.678 [2024-11-28 18:26:13.729712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.936 [2024-11-28 18:26:13.853612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.193 [2024-11-28 18:26:14.274829] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.450 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.450 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:02.450 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:02.450 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:03.013 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.013 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:03.013 Running I/O for 1 seconds... 00:24:03.944 2444.00 IOPS, 9.55 MiB/s 00:24:03.944 Latency(us) 00:24:03.944 [2024-11-28T17:26:16.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.944 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:03.944 Verification LBA range: start 0x0 length 0x2000 00:24:03.944 nvme0n1 : 1.03 2498.38 9.76 0.00 0.00 50568.50 9029.40 42137.22 00:24:03.944 [2024-11-28T17:26:16.044Z] =================================================================================================================== 00:24:03.944 [2024-11-28T17:26:16.044Z] Total : 2498.38 9.76 0.00 0.00 50568.50 9029.40 42137.22 00:24:03.944 { 00:24:03.944 "results": [ 00:24:03.944 { 00:24:03.944 "job": "nvme0n1", 00:24:03.944 "core_mask": "0x2", 00:24:03.944 "workload": "verify", 00:24:03.944 "status": "finished", 00:24:03.944 "verify_range": { 00:24:03.944 "start": 0, 00:24:03.944 "length": 8192 00:24:03.944 }, 00:24:03.944 "queue_depth": 128, 00:24:03.944 "io_size": 4096, 00:24:03.944 "runtime": 1.029466, 00:24:03.944 "iops": 2498.3826566394614, 00:24:03.944 "mibps": 9.759307252497896, 00:24:03.944 "io_failed": 0, 00:24:03.944 "io_timeout": 0, 00:24:03.944 "avg_latency_us": 50568.502072461255, 00:24:03.944 "min_latency_us": 9029.404444444444, 00:24:03.944 "max_latency_us": 42137.22074074074 00:24:03.944 } 00:24:03.944 ], 00:24:03.944 "core_count": 1 00:24:03.944 } 00:24:03.944 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:03.944 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:03.944 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:03.944 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:03.944 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:03.944 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:24:03.944 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:03.944 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:03.944 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:03.944 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:03.944 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:03.944 nvmf_trace.0 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2515497 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2515497 ']' 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2515497 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2515497 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2515497' 00:24:04.203 killing process with pid 2515497 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2515497 00:24:04.203 Received shutdown signal, test time was about 1.000000 seconds 00:24:04.203 00:24:04.203 Latency(us) 00:24:04.203 [2024-11-28T17:26:16.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.203 [2024-11-28T17:26:16.303Z] =================================================================================================================== 00:24:04.203 [2024-11-28T17:26:16.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.203 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2515497 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.137 rmmod nvme_tcp 00:24:05.137 rmmod nvme_fabrics 00:24:05.137 rmmod nvme_keyring 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.137 18:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2515359 ']' 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2515359 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2515359 ']' 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2515359 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2515359 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2515359' 00:24:05.137 killing process with pid 2515359 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2515359 00:24:05.137 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2515359 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.513 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.dRYHt1peap /tmp/tmp.57yj6NqKkl /tmp/tmp.BR75MuXkWt 00:24:08.414 00:24:08.414 real 1m53.102s 00:24:08.414 user 3m11.039s 00:24:08.414 sys 0m25.954s 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.414 ************************************ 00:24:08.414 END TEST nvmf_tls 
00:24:08.414 ************************************ 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:08.414 ************************************ 00:24:08.414 START TEST nvmf_fips 00:24:08.414 ************************************ 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:08.414 * Looking for test storage... 00:24:08.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:08.414 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.673 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:08.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.673 --rc genhtml_branch_coverage=1 00:24:08.673 --rc genhtml_function_coverage=1 00:24:08.673 --rc genhtml_legend=1 00:24:08.674 --rc geninfo_all_blocks=1 00:24:08.674 --rc geninfo_unexecuted_blocks=1 00:24:08.674 00:24:08.674 ' 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:08.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.674 --rc genhtml_branch_coverage=1 00:24:08.674 --rc genhtml_function_coverage=1 00:24:08.674 --rc genhtml_legend=1 00:24:08.674 --rc geninfo_all_blocks=1 00:24:08.674 --rc geninfo_unexecuted_blocks=1 00:24:08.674 00:24:08.674 ' 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:08.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.674 --rc genhtml_branch_coverage=1 00:24:08.674 --rc genhtml_function_coverage=1 00:24:08.674 --rc genhtml_legend=1 00:24:08.674 --rc geninfo_all_blocks=1 00:24:08.674 --rc geninfo_unexecuted_blocks=1 00:24:08.674 00:24:08.674 ' 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:08.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.674 --rc genhtml_branch_coverage=1 00:24:08.674 --rc genhtml_function_coverage=1 00:24:08.674 --rc genhtml_legend=1 00:24:08.674 --rc geninfo_all_blocks=1 00:24:08.674 --rc geninfo_unexecuted_blocks=1 00:24:08.674 00:24:08.674 ' 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:08.674 18:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:08.674 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:08.675 Error setting digest 00:24:08.675 4052F4E6797F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:08.675 4052F4E6797F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.675 
18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.675 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.260 18:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:11.260 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:11.260 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:11.260 18:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:11.260 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:11.260 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:11.260 18:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.260 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:11.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:24:11.261 00:24:11.261 --- 10.0.0.2 ping statistics --- 00:24:11.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.261 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:11.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:24:11.261 00:24:11.261 --- 10.0.0.1 ping statistics --- 00:24:11.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.261 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2518018 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2518018 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2518018 ']' 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.261 18:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:11.261 [2024-11-28 18:26:22.999324] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
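Before the FIPS-mode nvmf_tgt above is started inside cvl_0_0_ns_spdk, nvmf_tcp_init wires up the split-namespace topology whose ping checks appear a few lines earlier: the first E810 port (cvl_0_0) is moved into the namespace and addressed as 10.0.0.2 for the target, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1. Stripped of the helper wrappers, the sequence visible in the trace amounts to the following sketch (interface names and addresses as used in this run):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow the NVMe/TCP port through the firewall on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# connectivity checks in both directions, matching the ping output above
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1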
00:24:11.261 [2024-11-28 18:26:22.999484] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.261 [2024-11-28 18:26:23.153970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.261 [2024-11-28 18:26:23.293854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.261 [2024-11-28 18:26:23.293940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.261 [2024-11-28 18:26:23.293965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.261 [2024-11-28 18:26:23.293989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.261 [2024-11-28 18:26:23.294008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.261 [2024-11-28 18:26:23.295612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.bma 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.bma 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.bma 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.bma 00:24:12.204 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:12.204 [2024-11-28 18:26:24.236388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.204 [2024-11-28 18:26:24.252365] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:12.204 [2024-11-28 18:26:24.252676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.461 malloc0 00:24:12.461 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.461 18:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2518178 00:24:12.461 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.461 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2518178 /var/tmp/bdevperf.sock 00:24:12.461 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2518178 ']' 00:24:12.461 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.461 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.461 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.462 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.462 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:12.462 [2024-11-28 18:26:24.465839] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:24:12.462 [2024-11-28 18:26:24.465996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2518178 ] 00:24:12.720 [2024-11-28 18:26:24.600439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.720 [2024-11-28 18:26:24.721421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.284 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.284 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:13.284 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.bma 00:24:13.850 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:13.850 [2024-11-28 18:26:25.897356] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:14.108 TLSTESTn1 00:24:14.108 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:14.108 Running I/O for 10 seconds... 
00:24:16.416 2369.00 IOPS, 9.25 MiB/s [2024-11-28T17:26:29.451Z] 2434.50 IOPS, 9.51 MiB/s [2024-11-28T17:26:30.385Z] 2468.00 IOPS, 9.64 MiB/s [2024-11-28T17:26:31.321Z] 2470.75 IOPS, 9.65 MiB/s [2024-11-28T17:26:32.256Z] 2473.40 IOPS, 9.66 MiB/s [2024-11-28T17:26:33.192Z] 2475.67 IOPS, 9.67 MiB/s [2024-11-28T17:26:34.567Z] 2478.86 IOPS, 9.68 MiB/s [2024-11-28T17:26:35.501Z] 2480.38 IOPS, 9.69 MiB/s [2024-11-28T17:26:36.436Z] 2482.33 IOPS, 9.70 MiB/s [2024-11-28T17:26:36.436Z] 2481.90 IOPS, 9.69 MiB/s 00:24:24.336 Latency(us) 00:24:24.336 [2024-11-28T17:26:36.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.336 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:24.336 Verification LBA range: start 0x0 length 0x2000 00:24:24.336 TLSTESTn1 : 10.03 2487.72 9.72 0.00 0.00 51355.67 11796.48 49321.91 00:24:24.336 [2024-11-28T17:26:36.436Z] =================================================================================================================== 00:24:24.336 [2024-11-28T17:26:36.436Z] Total : 2487.72 9.72 0.00 0.00 51355.67 11796.48 49321.91 00:24:24.336 { 00:24:24.336 "results": [ 00:24:24.336 { 00:24:24.336 "job": "TLSTESTn1", 00:24:24.336 "core_mask": "0x4", 00:24:24.336 "workload": "verify", 00:24:24.336 "status": "finished", 00:24:24.336 "verify_range": { 00:24:24.336 "start": 0, 00:24:24.336 "length": 8192 00:24:24.336 }, 00:24:24.336 "queue_depth": 128, 00:24:24.336 "io_size": 4096, 00:24:24.336 "runtime": 10.027661, 00:24:24.336 "iops": 2487.7187212451636, 00:24:24.336 "mibps": 9.71765125486392, 00:24:24.336 "io_failed": 0, 00:24:24.336 "io_timeout": 0, 00:24:24.336 "avg_latency_us": 51355.671376454615, 00:24:24.336 "min_latency_us": 11796.48, 00:24:24.336 "max_latency_us": 49321.90814814815 00:24:24.336 } 00:24:24.336 ], 00:24:24.336 "core_count": 1 00:24:24.336 } 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:24.336 nvmf_trace.0 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2518178 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2518178 ']' 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # 
kill -0 2518178 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518178 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2518178' 00:24:24.336 killing process with pid 2518178 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2518178 00:24:24.336 Received shutdown signal, test time was about 10.000000 seconds 00:24:24.336 00:24:24.336 Latency(us) 00:24:24.336 [2024-11-28T17:26:36.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.336 [2024-11-28T17:26:36.436Z] =================================================================================================================== 00:24:24.336 [2024-11-28T17:26:36.436Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:24.336 18:26:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2518178 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.271 rmmod nvme_tcp 00:24:25.271 rmmod nvme_fabrics 00:24:25.271 rmmod nvme_keyring 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2518018 ']' 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2518018 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2518018 ']' 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2518018 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518018 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2518018' 00:24:25.271 killing process with pid 2518018 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2518018 00:24:25.271 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2518018 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.647 18:26:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.553 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:28.553 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.bma 00:24:28.553 00:24:28.553 real 0m20.216s 00:24:28.553 user 0m24.929s 00:24:28.553 sys 0m6.423s 00:24:28.553 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.553 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:28.553 ************************************ 00:24:28.553 END TEST nvmf_fips 00:24:28.553 ************************************ 00:24:28.553 18:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:28.553 18:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:28.553 18:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.553 18:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:28.553 ************************************ 00:24:28.553 START TEST nvmf_control_msg_list 00:24:28.553 ************************************ 00:24:28.553 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:28.811 * Looking for test storage... 
00:24:28.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:28.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.812 --rc genhtml_branch_coverage=1 00:24:28.812 --rc genhtml_function_coverage=1 00:24:28.812 --rc genhtml_legend=1 00:24:28.812 --rc geninfo_all_blocks=1 00:24:28.812 --rc geninfo_unexecuted_blocks=1 00:24:28.812 00:24:28.812 ' 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:28.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.812 --rc genhtml_branch_coverage=1 00:24:28.812 --rc genhtml_function_coverage=1 00:24:28.812 --rc genhtml_legend=1 00:24:28.812 --rc geninfo_all_blocks=1 00:24:28.812 --rc geninfo_unexecuted_blocks=1 00:24:28.812 00:24:28.812 ' 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:28.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.812 --rc genhtml_branch_coverage=1 00:24:28.812 --rc genhtml_function_coverage=1 00:24:28.812 --rc genhtml_legend=1 00:24:28.812 --rc geninfo_all_blocks=1 00:24:28.812 --rc geninfo_unexecuted_blocks=1 00:24:28.812 00:24:28.812 ' 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:28.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.812 --rc genhtml_branch_coverage=1 00:24:28.812 --rc genhtml_function_coverage=1 00:24:28.812 --rc genhtml_legend=1 00:24:28.812 --rc geninfo_all_blocks=1 00:24:28.812 --rc geninfo_unexecuted_blocks=1 00:24:28.812 00:24:28.812 ' 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.812 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.813 18:26:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:30.717 18:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:30.717 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.717 18:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:30.717 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:30.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:30.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.717 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.718 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:30.718 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:30.718 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.718 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.718 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.718 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.718 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:30.718 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.976 18:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:30.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:24:30.976 00:24:30.976 --- 10.0.0.2 ping statistics --- 00:24:30.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.976 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:24:30.976 00:24:30.976 --- 10.0.0.1 ping statistics --- 00:24:30.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.976 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2521701 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2521701 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2521701 ']' 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.976 18:26:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:30.976 [2024-11-28 18:26:42.972266] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:24:30.976 [2024-11-28 18:26:42.972411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.235 [2024-11-28 18:26:43.141222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.235 [2024-11-28 18:26:43.283104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.235 [2024-11-28 18:26:43.283173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.235 [2024-11-28 18:26:43.283198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.235 [2024-11-28 18:26:43.283222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.235 [2024-11-28 18:26:43.283241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:31.235 [2024-11-28 18:26:43.284873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.168 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.168 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:32.168 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:32.168 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.168 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.168 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.168 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.169 [2024-11-28 18:26:43.980798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.169 18:26:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.169 Malloc0 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.169 18:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:32.169 [2024-11-28 18:26:44.051583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2521859 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2521860 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2521861 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:32.169 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2521859 00:24:32.169 [2024-11-28 18:26:44.182286] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:32.169 [2024-11-28 18:26:44.182782] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:32.169 [2024-11-28 18:26:44.183195] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:33.542 Initializing NVMe Controllers 00:24:33.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:33.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:33.542 Initialization complete. Launching workers. 
00:24:33.542 ======================================================== 00:24:33.542 Latency(us) 00:24:33.542 Device Information : IOPS MiB/s Average min max 00:24:33.542 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2899.99 11.33 344.28 246.49 551.22 00:24:33.542 ======================================================== 00:24:33.542 Total : 2899.99 11.33 344.28 246.49 551.22 00:24:33.542 00:24:33.542 Initializing NVMe Controllers 00:24:33.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:33.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:33.542 Initialization complete. Launching workers. 00:24:33.542 ======================================================== 00:24:33.542 Latency(us) 00:24:33.542 Device Information : IOPS MiB/s Average min max 00:24:33.542 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2894.00 11.30 344.89 290.92 641.72 00:24:33.542 ======================================================== 00:24:33.542 Total : 2894.00 11.30 344.89 290.92 641.72 00:24:33.542 00:24:33.542 Initializing NVMe Controllers 00:24:33.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:33.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:33.542 Initialization complete. Launching workers. 00:24:33.542 ======================================================== 00:24:33.542 Latency(us) 00:24:33.542 Device Information : IOPS MiB/s Average min max 00:24:33.542 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40896.30 40748.84 41077.17 00:24:33.542 ======================================================== 00:24:33.542 Total : 25.00 0.10 40896.30 40748.84 41077.17 00:24:33.542 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2521860 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2521861 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.542 rmmod nvme_tcp 00:24:33.542 rmmod nvme_fabrics 00:24:33.542 rmmod nvme_keyring 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 2521701 ']' 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2521701 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2521701 ']' 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2521701 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2521701 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.542 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2521701' 00:24:33.542 killing process with pid 2521701 00:24:33.543 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2521701 00:24:33.543 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2521701 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.919 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:36.836 00:24:36.836 real 0m8.096s 00:24:36.836 user 0m7.452s 00:24:36.836 sys 0m2.857s 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:36.836 ************************************ 00:24:36.836 END TEST nvmf_control_msg_list 00:24:36.836 ************************************ 
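For reference, the nvmf_control_msg_list run that finished above reduces to a short target-side RPC sequence plus the perf invocations visible in the trace. The lines below are a condensed, hedged sketch reconstructed only from commands that appear in the log (the nqn.2024-07.io.spdk:cnode0 subsystem, the Malloc0 bdev, the 10.0.0.2:4420 listener, and the transport options are taken verbatim from the trace; relative paths to rpc.py and spdk_nvme_perf assume the SPDK repo layout shown in the log and an nvmf_tgt already listening on the default RPC socket):

  # target side: transport tuned for the control-message-list case (options copied from the trace)
  scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: one of the three single-depth random-read workers started in parallel
  # (the log runs the same command on core masks 0x2, 0x4 and 0x8)
  build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'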
00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:36.836 ************************************ 00:24:36.836 START TEST nvmf_wait_for_buf 00:24:36.836 ************************************ 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:36.836 * Looking for test storage... 00:24:36.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:36.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.836 --rc genhtml_branch_coverage=1 00:24:36.836 --rc genhtml_function_coverage=1 00:24:36.836 --rc genhtml_legend=1 00:24:36.836 --rc geninfo_all_blocks=1 00:24:36.836 --rc geninfo_unexecuted_blocks=1 00:24:36.836 00:24:36.836 ' 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:36.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.836 --rc genhtml_branch_coverage=1 00:24:36.836 --rc genhtml_function_coverage=1 00:24:36.836 --rc genhtml_legend=1 00:24:36.836 --rc geninfo_all_blocks=1 00:24:36.836 --rc geninfo_unexecuted_blocks=1 00:24:36.836 00:24:36.836 ' 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:36.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.836 --rc genhtml_branch_coverage=1 00:24:36.836 --rc genhtml_function_coverage=1 00:24:36.836 --rc genhtml_legend=1 00:24:36.836 --rc geninfo_all_blocks=1 00:24:36.836 --rc geninfo_unexecuted_blocks=1 00:24:36.836 00:24:36.836 ' 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:36.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.836 --rc genhtml_branch_coverage=1 00:24:36.836 --rc genhtml_function_coverage=1 00:24:36.836 --rc genhtml_legend=1 00:24:36.836 --rc geninfo_all_blocks=1 00:24:36.836 --rc geninfo_unexecuted_blocks=1 00:24:36.836 00:24:36.836 ' 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.836 18:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.836 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.837 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.096 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:38.998 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.998 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.998 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.998 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.998 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.998 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.998 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.998 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.999 
18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:38.999 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:38.999 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:38.999 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:38.999 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.999 18:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.999 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:24:39.258 00:24:39.258 --- 10.0.0.2 ping statistics --- 00:24:39.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.258 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:39.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:24:39.258 00:24:39.258 --- 10.0.0.1 ping statistics --- 00:24:39.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.258 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2524080 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2524080 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2524080 ']' 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.258 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:39.259 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.259 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:39.259 [2024-11-28 18:26:51.337447] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:24:39.259 [2024-11-28 18:26:51.337581] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.517 [2024-11-28 18:26:51.492208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.775 [2024-11-28 18:26:51.631780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.775 [2024-11-28 18:26:51.631864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.775 [2024-11-28 18:26:51.631890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.775 [2024-11-28 18:26:51.631915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.775 [2024-11-28 18:26:51.631935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:39.775 [2024-11-28 18:26:51.633596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:40.341 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.341 18:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.599 Malloc0 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.599 [2024-11-28 18:26:52.680867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.599 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.857 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.857 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:40.857 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.857 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.857 [2024-11-28 18:26:52.705147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.857 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.857 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:40.858 [2024-11-28 18:26:52.858695] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:42.758 Initializing NVMe Controllers 00:24:42.758 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:42.759 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:42.759 Initialization complete. Launching workers. 00:24:42.759 ======================================================== 00:24:42.759 Latency(us) 00:24:42.759 Device Information : IOPS MiB/s Average min max 00:24:42.759 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 114.00 14.25 36530.66 7920.61 71843.04 00:24:42.759 ======================================================== 00:24:42.759 Total : 114.00 14.25 36530.66 7920.61 71843.04 00:24:42.759 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1798 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1798 -eq 0 ]] 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:42.759 rmmod nvme_tcp 00:24:42.759 rmmod nvme_fabrics 00:24:42.759 rmmod nvme_keyring 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2524080 ']' 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2524080 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2524080 ']' 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2524080 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2524080 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2524080' 00:24:42.759 killing process with pid 2524080 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2524080 00:24:42.759 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2524080 00:24:43.692 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.692 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.692 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.692 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:43.692 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:43.692 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.692 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.692 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.692 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.692 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.692 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.693 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.243 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:46.243 00:24:46.243 real 0m9.028s 00:24:46.243 user 0m5.447s 00:24:46.243 sys 0m2.275s 00:24:46.243 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.243 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:46.243 ************************************ 00:24:46.243 END TEST nvmf_wait_for_buf 00:24:46.243 ************************************ 00:24:46.243 18:26:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:46.243 18:26:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.244 18:26:57 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:46.244 ************************************ 00:24:46.244 START TEST nvmf_fuzz 00:24:46.244 ************************************ 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:46.244 * Looking for test storage... 00:24:46.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:46.244 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:46.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.244 --rc genhtml_branch_coverage=1 00:24:46.244 --rc genhtml_function_coverage=1 00:24:46.244 --rc genhtml_legend=1 00:24:46.244 --rc geninfo_all_blocks=1 00:24:46.244 --rc geninfo_unexecuted_blocks=1 00:24:46.244 00:24:46.244 ' 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:46.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.244 --rc genhtml_branch_coverage=1 00:24:46.244 --rc genhtml_function_coverage=1 00:24:46.244 --rc genhtml_legend=1 00:24:46.244 --rc geninfo_all_blocks=1 00:24:46.244 --rc geninfo_unexecuted_blocks=1 00:24:46.244 00:24:46.244 ' 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:46.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.244 --rc genhtml_branch_coverage=1 00:24:46.244 --rc genhtml_function_coverage=1 00:24:46.244 --rc genhtml_legend=1 00:24:46.244 --rc geninfo_all_blocks=1 00:24:46.244 --rc geninfo_unexecuted_blocks=1 00:24:46.244 00:24:46.244 ' 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:46.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.244 --rc genhtml_branch_coverage=1 00:24:46.244 --rc genhtml_function_coverage=1 00:24:46.244 --rc genhtml_legend=1 00:24:46.244 --rc geninfo_all_blocks=1 00:24:46.244 --rc geninfo_unexecuted_blocks=1 00:24:46.244 00:24:46.244 ' 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:46.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:46.244 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:46.245 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:48.211 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:48.211 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:48.211 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.211 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:48.212 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:48.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:48.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:24:48.212 00:24:48.212 --- 10.0.0.2 ping statistics --- 00:24:48.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.212 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:48.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:24:48.212 00:24:48.212 --- 10.0.0.1 ping statistics --- 00:24:48.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.212 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2526583 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2526583 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2526583 ']' 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
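The fabric bring-up traced above reduces to a short shell sequence: the E810 port names are read from sysfs, the target-side port is moved into its own network namespace, both ends get static 10.0.0.x addresses, the NVMe/TCP port is opened in iptables, and nvmf_tgt is started inside the namespace. The sketch below condenses those steps for reference; the interface names and addresses are this rig's values, and the socket-polling loop is a simplified stand-in for the suite's waitforlisten helper.

  # Interface names come from a sysfs scan of the supported PCI functions.
  ls /sys/bus/pci/devices/0000:0a:00.0/net/        # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:0a:00.1/net/        # -> cvl_0_1

  # Put the target-side port in its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port toward the initiator side and check reachability.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2

  # Start the SPDK target inside the namespace and wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done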
00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.212 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:49.586 Malloc0 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:49.586 18:27:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:21.689 Fuzzing completed. 
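The 30-second pass whose completion is reported here corresponds to a single invocation of the nvme_fuzz example app against the listener created above; the flag values (core mask, runtime, seed, transport ID) are the ones shown in the trace, and the relative path assumes the SPDK tree as the working directory.

  # Randomized pass: run for 30 s with a fixed seed against the TCP listener.
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' \
      -N -a

The second pass a few lines below drops -t/-S and instead replays the canned test/app/fuzz/nvme_fuzz/example.json via -j.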
Shutting down the fuzz application 00:25:21.689 00:25:21.689 Dumping successful admin opcodes: 00:25:21.689 9, 10, 00:25:21.689 Dumping successful io opcodes: 00:25:21.689 0, 9, 00:25:21.689 NS: 0x2000008efec0 I/O qp, Total commands completed: 313246, total successful commands: 1848, random_seed: 771472832 00:25:21.689 NS: 0x2000008efec0 admin qp, Total commands completed: 39456, total successful commands: 10, random_seed: 3479057152 00:25:21.689 18:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:22.256 Fuzzing completed. Shutting down the fuzz application 00:25:22.256 00:25:22.256 Dumping successful admin opcodes: 00:25:22.256 00:25:22.256 Dumping successful io opcodes: 00:25:22.256 00:25:22.256 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1299608953 00:25:22.256 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1299873745 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:22.256 rmmod nvme_tcp 00:25:22.256 rmmod nvme_fabrics 00:25:22.256 rmmod nvme_keyring 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 2526583 ']' 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 2526583 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2526583 ']' 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 2526583 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2526583 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2526583' 00:25:22.256 killing process with pid 2526583 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 2526583 00:25:22.256 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 2526583 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.631 18:27:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.535 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:25.535 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:25.535 00:25:25.535 real 0m39.682s 00:25:25.535 user 0m57.030s 00:25:25.535 sys 0m13.065s 00:25:25.535 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:25.536 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:25.536 ************************************ 00:25:25.536 END TEST nvmf_fuzz 00:25:25.536 ************************************ 00:25:25.536 18:27:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:25.536 18:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:25.536 18:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:25.536 18:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:25.536 ************************************ 00:25:25.536 START 
TEST nvmf_multiconnection 00:25:25.536 ************************************ 00:25:25.536 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:25.794 * Looking for test storage... 00:25:25.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.794 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:25.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.795 --rc genhtml_branch_coverage=1 00:25:25.795 --rc genhtml_function_coverage=1 00:25:25.795 --rc genhtml_legend=1 00:25:25.795 --rc geninfo_all_blocks=1 00:25:25.795 --rc geninfo_unexecuted_blocks=1 00:25:25.795 00:25:25.795 ' 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:25.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.795 --rc genhtml_branch_coverage=1 00:25:25.795 --rc genhtml_function_coverage=1 00:25:25.795 --rc genhtml_legend=1 00:25:25.795 --rc geninfo_all_blocks=1 00:25:25.795 --rc geninfo_unexecuted_blocks=1 00:25:25.795 00:25:25.795 ' 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:25.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.795 --rc genhtml_branch_coverage=1 00:25:25.795 --rc genhtml_function_coverage=1 00:25:25.795 --rc genhtml_legend=1 00:25:25.795 --rc geninfo_all_blocks=1 00:25:25.795 --rc geninfo_unexecuted_blocks=1 00:25:25.795 00:25:25.795 ' 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:25.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.795 --rc genhtml_branch_coverage=1 00:25:25.795 --rc genhtml_function_coverage=1 00:25:25.795 --rc genhtml_legend=1 00:25:25.795 --rc geninfo_all_blocks=1 00:25:25.795 --rc geninfo_unexecuted_blocks=1 00:25:25.795 00:25:25.795 ' 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:25.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.795 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:25.796 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:25.796 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:25.796 18:27:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:27.695 18:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.695 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:27.696 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:27.696 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:27.696 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:27.955 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:27.955 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:27.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:25:27.955 00:25:27.955 --- 10.0.0.2 ping statistics --- 00:25:27.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.955 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:27.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:25:27.955 00:25:27.955 --- 10.0.0.1 ping statistics --- 00:25:27.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.955 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:27.955 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:27.956 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.956 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=2533167 00:25:27.956 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:27.956 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 2533167 00:25:27.956 18:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 2533167 ']' 00:25:27.956 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.956 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.956 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.956 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.956 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:28.214 [2024-11-28 18:27:40.069903] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:25:28.214 [2024-11-28 18:27:40.070064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.214 [2024-11-28 18:27:40.233903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:28.472 [2024-11-28 18:27:40.375925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:28.472 [2024-11-28 18:27:40.376014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.472 [2024-11-28 18:27:40.376048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:28.472 [2024-11-28 18:27:40.376072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:28.472 [2024-11-28 18:27:40.376091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
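The -m 0xF mask handed to nvmf_tgt here is a plain CPU bitmap: 0xF = 0b1111 selects cores 0-3, which is why EAL reports four available cores and four reactors start on the next lines. A mask for the first N cores can be built the same way the suite hard-codes it; the sketch below reuses this rig's namespace and instance id.

  # Core mask for the first N cores: 0xF (N=4) -> reactors on cores 0-3.
  ncores=4
  mask=$(printf '0x%X' $(( (1 << ncores) - 1 )))
  # -i sets the shared-memory instance id, -e the tracepoint group mask
  # (0xFFFF enables every group, hence the spdk_trace hint in the notices).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m "$mask"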
00:25:28.472 [2024-11-28 18:27:40.378891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.472 [2024-11-28 18:27:40.378953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.472 [2024-11-28 18:27:40.379007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.472 [2024-11-28 18:27:40.379013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.039 [2024-11-28 18:27:41.077989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.039 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.297 Malloc1 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
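The rpc_cmd calls traced in this stretch are effectively the suite's wrapper around scripts/rpc.py; spelled out against the default /var/tmp/spdk.sock, with the repository root as the working directory, they amount to one transport plus eleven identical subsystem/namespace/listener triples (NVMF_SUBSYS=11), roughly:

  # One TCP transport (the -o/-u values are taken verbatim from the trace),
  # then a 64 MiB malloc bdev with 512-byte blocks behind each subsystem,
  # all listening on the namespaced 10.0.0.2:4420.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
      ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done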
00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.297 [2024-11-28 18:27:41.201835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.297 Malloc2 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.297 18:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.297 Malloc3 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.297 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.555 Malloc4 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.555 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.556 Malloc5 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.556 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.814 Malloc6 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 Malloc7 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 Malloc8 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.815 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.073 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.074 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:30.074 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.074 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.074 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.074 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.074 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:30.074 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.074 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.074 Malloc9 00:25:30.074 18:27:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:30.074 18:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.074 Malloc10 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.074 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.332 Malloc11 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:30.332 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:30.333 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:30.899 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:30.899 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:30.899 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:30.899 18:27:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:30.899 18:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:32.800 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:32.800 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:32.800 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:32.800 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:32.800 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.800 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:32.800 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.800 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:33.735 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:33.735 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:33.735 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:33.735 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:33.735 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:35.636 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:35.636 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:35.636 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:35.636 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:35.636 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:35.636 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:35.636 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.636 18:27:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:36.572 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:36.572 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:36.572 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:36.572 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:36.572 18:27:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:38.472 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:38.472 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:38.472 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:38.472 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:38.472 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:38.472 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:38.472 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.472 18:27:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:39.039 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:39.039 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:39.039 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.039 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:39.039 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:41.570 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:41.570 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:41.570 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:41.570 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:41.570 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.570 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:41.570 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.570 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:41.829 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:41.829 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:41.829 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.829 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:41.829 18:27:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:44.358 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:44.358 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:44.358 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:44.358 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:44.358 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.358 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:44.358 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.358 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:44.617 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:44.617 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:44.617 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:44.617 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:44.617 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:47.145 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:47.145 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:47.145 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:47.145 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:47.145 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.145 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:47.145 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.145 18:27:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:47.712 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:47.712 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:47.712 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:47.712 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:47.712 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:49.670 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:49.670 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:49.670 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:49.670 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:49.670 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:49.670 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:49.670 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.670 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:50.607 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:50.607 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:50.607 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:50.607 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:50.607 18:28:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:52.507 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:52.507 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:52.507 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:52.507 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:52.507 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:52.507 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:52.507 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:52.507 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:53.443 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:53.443 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:53.443 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:53.443 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:53.443 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:55.974 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:55.974 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:55.974 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:55.974 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:55.974 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:55.974 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:55.974 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.974 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:56.540 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:56.541 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:56.541 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.541 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:56.541 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:58.442 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:58.442 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:58.442 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:58.442 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:58.442 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:58.442 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:58.442 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.442 18:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:59.378 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:59.378 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:59.378 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:59.378 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:59.378 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:01.281 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:01.540 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:01.540 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:01.540 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:01.540 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:01.540 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:01.540 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:01.540 [global] 00:26:01.540 thread=1 00:26:01.540 invalidate=1 00:26:01.540 rw=read 00:26:01.540 time_based=1 00:26:01.540 runtime=10 00:26:01.540 ioengine=libaio 00:26:01.540 direct=1 00:26:01.540 bs=262144 00:26:01.540 iodepth=64 00:26:01.540 norandommap=1 00:26:01.540 numjobs=1 00:26:01.540 00:26:01.540 [job0] 00:26:01.540 filename=/dev/nvme0n1 00:26:01.540 [job1] 00:26:01.540 filename=/dev/nvme10n1 00:26:01.540 [job2] 00:26:01.540 filename=/dev/nvme1n1 00:26:01.540 [job3] 00:26:01.540 filename=/dev/nvme2n1 00:26:01.540 [job4] 00:26:01.540 filename=/dev/nvme3n1 00:26:01.540 [job5] 00:26:01.540 filename=/dev/nvme4n1 00:26:01.540 [job6] 00:26:01.540 filename=/dev/nvme5n1 00:26:01.540 [job7] 00:26:01.540 filename=/dev/nvme6n1 00:26:01.540 [job8] 00:26:01.540 filename=/dev/nvme7n1 00:26:01.540 [job9] 00:26:01.540 filename=/dev/nvme8n1 00:26:01.540 [job10] 00:26:01.540 filename=/dev/nvme9n1 00:26:01.540 Could not set queue depth (nvme0n1) 00:26:01.540 Could not set queue depth (nvme10n1) 00:26:01.540 Could not set queue depth (nvme1n1) 00:26:01.540 Could not set queue depth (nvme2n1) 00:26:01.540 Could not set queue depth (nvme3n1) 00:26:01.540 Could not set queue depth (nvme4n1) 00:26:01.540 Could not set queue depth (nvme5n1) 00:26:01.540 Could not set queue depth (nvme6n1) 00:26:01.540 Could not set queue depth (nvme7n1) 00:26:01.540 Could not set queue depth (nvme8n1) 00:26:01.540 Could not set queue depth (nvme9n1) 00:26:01.798 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.798 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.798 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.798 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.798 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.798 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.798 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.798 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.798 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.798 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.798 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:01.798 fio-3.35 00:26:01.798 Starting 11 threads 00:26:14.012 00:26:14.012 job0: (groupid=0, jobs=1): err= 0: pid=2537683: Thu Nov 28 18:28:24 2024 00:26:14.012 read: IOPS=183, BW=45.8MiB/s (48.1MB/s)(464MiB/10116msec) 00:26:14.012 slat (usec): min=9, max=133895, avg=4529.72, stdev=17081.98 00:26:14.012 clat (msec): min=36, max=771, avg=344.23, stdev=126.47 00:26:14.012 lat (msec): min=37, max=771, avg=348.76, stdev=128.73 00:26:14.012 clat percentiles (msec): 00:26:14.012 | 1.00th=[ 52], 5.00th=[ 105], 10.00th=[ 167], 20.00th=[ 232], 00:26:14.012 | 30.00th=[ 288], 40.00th=[ 347], 50.00th=[ 368], 60.00th=[ 393], 00:26:14.012 | 70.00th=[ 414], 80.00th=[ 447], 90.00th=[ 477], 95.00th=[ 502], 00:26:14.012 | 99.00th=[ 718], 99.50th=[ 751], 99.90th=[ 768], 99.95th=[ 768], 00:26:14.012 | 99.99th=[ 768] 00:26:14.012 bw ( KiB/s): min=28614, max=77824, per=5.19%, avg=45872.30, stdev=13765.70, samples=20 00:26:14.012 iops : min= 111, max= 304, avg=179.15, stdev=53.82, samples=20 00:26:14.012 lat (msec) : 50=0.75%, 100=4.15%, 250=16.87%, 500=72.78%, 750=4.85% 00:26:14.012 lat (msec) : 1000=0.59% 00:26:14.012 cpu : usr=0.08%, sys=0.63%, ctx=259, majf=0, minf=4097 00:26:14.012 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:14.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.012 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:14.012 issued rwts: total=1855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.012 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:14.012 job1: (groupid=0, jobs=1): err= 0: pid=2537697: Thu Nov 28 18:28:24 2024 00:26:14.012 read: IOPS=301, BW=75.4MiB/s (79.0MB/s)(766MiB/10157msec) 00:26:14.012 slat (usec): min=13, max=150371, avg=3266.79, stdev=12102.65 00:26:14.012 clat (msec): min=33, max=648, avg=208.85, stdev=117.18 00:26:14.012 lat (msec): min=33, max=676, avg=212.12, stdev=119.02 00:26:14.012 clat percentiles (msec): 00:26:14.012 | 1.00th=[ 79], 5.00th=[ 107], 10.00th=[ 117], 20.00th=[ 131], 00:26:14.012 | 30.00th=[ 138], 40.00th=[ 150], 50.00th=[ 161], 60.00th=[ 176], 00:26:14.012 | 70.00th=[ 194], 80.00th=[ 300], 90.00th=[ 405], 95.00th=[ 472], 00:26:14.012 | 99.00th=[ 584], 99.50th=[ 600], 99.90th=[ 651], 99.95th=[ 651], 00:26:14.012 | 99.99th=[ 651] 00:26:14.012 bw ( KiB/s): min=32256, max=127488, per=8.68%, 
avg=76739.30, stdev=35278.54, samples=20 00:26:14.012 iops : min= 126, max= 498, avg=299.75, stdev=137.80, samples=20 00:26:14.012 lat (msec) : 50=0.39%, 100=2.58%, 250=71.62%, 500=21.78%, 750=3.63% 00:26:14.012 cpu : usr=0.19%, sys=1.01%, ctx=338, majf=0, minf=4098 00:26:14.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:26:14.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:14.012 issued rwts: total=3062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.012 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:14.012 job2: (groupid=0, jobs=1): err= 0: pid=2537698: Thu Nov 28 18:28:24 2024 00:26:14.012 read: IOPS=153, BW=38.4MiB/s (40.3MB/s)(391MiB/10163msec) 00:26:14.012 slat (usec): min=12, max=401718, avg=5613.73, stdev=22319.26 00:26:14.012 clat (msec): min=58, max=858, avg=410.15, stdev=126.37 00:26:14.012 lat (msec): min=58, max=858, avg=415.77, stdev=127.73 00:26:14.012 clat percentiles (msec): 00:26:14.012 | 1.00th=[ 75], 5.00th=[ 167], 10.00th=[ 234], 20.00th=[ 334], 00:26:14.013 | 30.00th=[ 368], 40.00th=[ 397], 50.00th=[ 418], 60.00th=[ 435], 00:26:14.013 | 70.00th=[ 468], 80.00th=[ 502], 90.00th=[ 550], 95.00th=[ 600], 00:26:14.013 | 99.00th=[ 751], 99.50th=[ 760], 99.90th=[ 860], 99.95th=[ 860], 00:26:14.013 | 99.99th=[ 860] 00:26:14.013 bw ( KiB/s): min=20992, max=67072, per=4.34%, avg=38370.90, stdev=10776.27, samples=20 00:26:14.013 iops : min= 82, max= 262, avg=149.85, stdev=42.11, samples=20 00:26:14.013 lat (msec) : 100=4.35%, 250=6.46%, 500=69.16%, 750=18.94%, 1000=1.09% 00:26:14.013 cpu : usr=0.10%, sys=0.55%, ctx=193, majf=0, minf=3721 00:26:14.013 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:26:14.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.013 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:14.013 issued rwts: total=1563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:14.013 job3: (groupid=0, jobs=1): err= 0: pid=2537702: Thu Nov 28 18:28:24 2024 00:26:14.013 read: IOPS=1015, BW=254MiB/s (266MB/s)(2547MiB/10029msec) 00:26:14.013 slat (usec): min=11, max=49377, avg=944.94, stdev=2596.16 00:26:14.013 clat (msec): min=3, max=304, avg=62.01, stdev=32.49 00:26:14.013 lat (msec): min=3, max=304, avg=62.95, stdev=32.91 00:26:14.013 clat percentiles (msec): 00:26:14.013 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 41], 20.00th=[ 46], 00:26:14.013 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 56], 60.00th=[ 57], 00:26:14.013 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 129], 95.00th=[ 140], 00:26:14.013 | 99.00th=[ 155], 99.50th=[ 174], 99.90th=[ 279], 99.95th=[ 305], 00:26:14.013 | 99.99th=[ 305] 00:26:14.013 bw ( KiB/s): min=112128, max=401920, per=29.31%, avg=259171.95, stdev=89527.63, samples=20 00:26:14.013 iops : min= 438, max= 1570, avg=1012.35, stdev=349.71, samples=20 00:26:14.013 lat (msec) : 4=0.02%, 10=0.08%, 20=3.20%, 50=28.17%, 100=56.93% 00:26:14.013 lat (msec) : 250=11.26%, 500=0.34% 00:26:14.013 cpu : usr=0.68%, sys=3.51%, ctx=1886, majf=0, minf=4097 00:26:14.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:26:14.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:14.013 issued rwts: total=10188,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:26:14.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:14.013 job4: (groupid=0, jobs=1): err= 0: pid=2537703: Thu Nov 28 18:28:24 2024 00:26:14.013 read: IOPS=190, BW=47.7MiB/s (50.1MB/s)(482MiB/10091msec) 00:26:14.013 slat (usec): min=14, max=157778, avg=4864.73, stdev=16284.05 00:26:14.013 clat (msec): min=67, max=561, avg=329.99, stdev=85.32 00:26:14.013 lat (msec): min=67, max=561, avg=334.85, stdev=86.54 00:26:14.013 clat percentiles (msec): 00:26:14.013 | 1.00th=[ 132], 5.00th=[ 182], 10.00th=[ 201], 20.00th=[ 253], 00:26:14.013 | 30.00th=[ 300], 40.00th=[ 326], 50.00th=[ 342], 60.00th=[ 359], 00:26:14.013 | 70.00th=[ 372], 80.00th=[ 401], 90.00th=[ 430], 95.00th=[ 456], 00:26:14.013 | 99.00th=[ 527], 99.50th=[ 535], 99.90th=[ 558], 99.95th=[ 558], 00:26:14.013 | 99.99th=[ 558] 00:26:14.013 bw ( KiB/s): min=31232, max=91136, per=5.39%, avg=47687.40, stdev=13115.49, samples=20 00:26:14.013 iops : min= 122, max= 356, avg=186.25, stdev=51.22, samples=20 00:26:14.013 lat (msec) : 100=0.52%, 250=18.42%, 500=79.81%, 750=1.25% 00:26:14.013 cpu : usr=0.13%, sys=0.56%, ctx=268, majf=0, minf=4097 00:26:14.013 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:14.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.013 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:14.013 issued rwts: total=1927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:14.013 job5: (groupid=0, jobs=1): err= 0: pid=2537704: Thu Nov 28 18:28:24 2024 00:26:14.013 read: IOPS=164, BW=41.1MiB/s (43.1MB/s)(418MiB/10158msec) 00:26:14.013 slat (usec): min=13, max=248083, avg=5786.32, stdev=21364.58 00:26:14.013 clat (msec): min=47, max=663, avg=383.20, stdev=97.49 00:26:14.013 lat (msec): min=47, max=692, avg=388.99, stdev=99.33 00:26:14.013 clat percentiles (msec): 00:26:14.013 | 1.00th=[ 81], 5.00th=[ 201], 10.00th=[ 288], 20.00th=[ 330], 00:26:14.013 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 380], 60.00th=[ 397], 00:26:14.013 | 70.00th=[ 422], 80.00th=[ 447], 90.00th=[ 489], 95.00th=[ 558], 00:26:14.013 | 99.00th=[ 651], 99.50th=[ 651], 99.90th=[ 667], 99.95th=[ 667], 00:26:14.013 | 99.99th=[ 667] 00:26:14.013 bw ( KiB/s): min=19968, max=55296, per=4.65%, avg=41109.05, stdev=7988.32, samples=20 00:26:14.013 iops : min= 78, max= 216, avg=160.55, stdev=31.19, samples=20 00:26:14.013 lat (msec) : 50=0.78%, 100=0.78%, 250=5.63%, 500=83.71%, 750=9.10% 00:26:14.013 cpu : usr=0.09%, sys=0.62%, ctx=191, majf=0, minf=4097 00:26:14.013 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:14.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.013 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:14.013 issued rwts: total=1670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:14.013 job6: (groupid=0, jobs=1): err= 0: pid=2537705: Thu Nov 28 18:28:24 2024 00:26:14.013 read: IOPS=177, BW=44.4MiB/s (46.6MB/s)(448MiB/10092msec) 00:26:14.013 slat (usec): min=13, max=183254, avg=5582.29, stdev=19197.16 00:26:14.013 clat (msec): min=72, max=599, avg=354.39, stdev=87.09 00:26:14.013 lat (msec): min=110, max=646, avg=359.97, stdev=89.03 00:26:14.013 clat percentiles (msec): 00:26:14.013 | 1.00th=[ 113], 5.00th=[ 209], 10.00th=[ 243], 20.00th=[ 292], 00:26:14.013 | 30.00th=[ 317], 40.00th=[ 
347], 50.00th=[ 363], 60.00th=[ 380], 00:26:14.013 | 70.00th=[ 401], 80.00th=[ 422], 90.00th=[ 460], 95.00th=[ 477], 00:26:14.013 | 99.00th=[ 535], 99.50th=[ 558], 99.90th=[ 600], 99.95th=[ 600], 00:26:14.013 | 99.99th=[ 600] 00:26:14.013 bw ( KiB/s): min=32256, max=59904, per=5.00%, avg=44258.15, stdev=8042.53, samples=20 00:26:14.013 iops : min= 126, max= 234, avg=172.85, stdev=31.42, samples=20 00:26:14.013 lat (msec) : 100=0.06%, 250=12.88%, 500=84.38%, 750=2.68% 00:26:14.013 cpu : usr=0.13%, sys=0.62%, ctx=192, majf=0, minf=4097 00:26:14.013 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:26:14.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.013 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:14.013 issued rwts: total=1793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:14.013 job7: (groupid=0, jobs=1): err= 0: pid=2537706: Thu Nov 28 18:28:24 2024 00:26:14.013 read: IOPS=266, BW=66.6MiB/s (69.9MB/s)(677MiB/10158msec) 00:26:14.013 slat (usec): min=12, max=250402, avg=3706.42, stdev=15054.35 00:26:14.013 clat (msec): min=25, max=695, avg=236.28, stdev=131.56 00:26:14.013 lat (msec): min=25, max=695, avg=239.99, stdev=133.62 00:26:14.013 clat percentiles (msec): 00:26:14.013 | 1.00th=[ 34], 5.00th=[ 120], 10.00th=[ 136], 20.00th=[ 150], 00:26:14.013 | 30.00th=[ 161], 40.00th=[ 171], 50.00th=[ 184], 60.00th=[ 199], 00:26:14.013 | 70.00th=[ 230], 80.00th=[ 313], 90.00th=[ 481], 95.00th=[ 542], 00:26:14.013 | 99.00th=[ 609], 99.50th=[ 642], 99.90th=[ 693], 99.95th=[ 693], 00:26:14.013 | 99.99th=[ 693] 00:26:14.013 bw ( KiB/s): min=28672, max=110080, per=7.65%, avg=67651.30, stdev=30612.80, samples=20 00:26:14.013 iops : min= 112, max= 430, avg=264.25, stdev=119.57, samples=20 00:26:14.013 lat (msec) : 50=1.11%, 100=1.33%, 250=69.89%, 500=19.10%, 750=8.57% 00:26:14.013 cpu : usr=0.09%, sys=0.98%, ctx=310, majf=0, minf=4098 00:26:14.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:14.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:14.013 issued rwts: total=2707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:14.013 job8: (groupid=0, jobs=1): err= 0: pid=2537707: Thu Nov 28 18:28:24 2024 00:26:14.013 read: IOPS=545, BW=136MiB/s (143MB/s)(1387MiB/10160msec) 00:26:14.013 slat (usec): min=8, max=396576, avg=1582.62, stdev=9928.21 00:26:14.013 clat (msec): min=22, max=914, avg=115.53, stdev=140.22 00:26:14.013 lat (msec): min=22, max=914, avg=117.12, stdev=141.75 00:26:14.013 clat percentiles (msec): 00:26:14.013 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 43], 00:26:14.013 | 30.00th=[ 44], 40.00th=[ 46], 50.00th=[ 50], 60.00th=[ 52], 00:26:14.013 | 70.00th=[ 56], 80.00th=[ 190], 90.00th=[ 368], 95.00th=[ 405], 00:26:14.013 | 99.00th=[ 493], 99.50th=[ 911], 99.90th=[ 911], 99.95th=[ 911], 00:26:14.013 | 99.99th=[ 919] 00:26:14.013 bw ( KiB/s): min=15360, max=377856, per=15.87%, avg=140359.75, stdev=137417.06, samples=20 00:26:14.013 iops : min= 60, max= 1476, avg=548.25, stdev=536.81, samples=20 00:26:14.013 lat (msec) : 50=53.78%, 100=23.44%, 250=4.34%, 500=17.63%, 750=0.22% 00:26:14.013 lat (msec) : 1000=0.59% 00:26:14.013 cpu : usr=0.23%, sys=1.72%, ctx=529, majf=0, minf=4098 00:26:14.013 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:14.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:14.013 issued rwts: total=5547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:14.013 job9: (groupid=0, jobs=1): err= 0: pid=2537710: Thu Nov 28 18:28:24 2024 00:26:14.013 read: IOPS=308, BW=77.0MiB/s (80.8MB/s)(782MiB/10154msec) 00:26:14.013 slat (usec): min=13, max=298631, avg=3019.19, stdev=13580.47 00:26:14.013 clat (msec): min=5, max=634, avg=204.52, stdev=128.66 00:26:14.013 lat (msec): min=5, max=635, avg=207.54, stdev=130.80 00:26:14.013 clat percentiles (msec): 00:26:14.013 | 1.00th=[ 10], 5.00th=[ 43], 10.00th=[ 66], 20.00th=[ 77], 00:26:14.013 | 30.00th=[ 132], 40.00th=[ 138], 50.00th=[ 144], 60.00th=[ 190], 00:26:14.013 | 70.00th=[ 309], 80.00th=[ 359], 90.00th=[ 393], 95.00th=[ 422], 00:26:14.013 | 99.00th=[ 464], 99.50th=[ 477], 99.90th=[ 542], 99.95th=[ 634], 00:26:14.013 | 99.99th=[ 634] 00:26:14.013 bw ( KiB/s): min=31294, max=200192, per=8.87%, avg=78462.20, stdev=48329.08, samples=20 00:26:14.013 iops : min= 122, max= 782, avg=306.45, stdev=188.82, samples=20 00:26:14.013 lat (msec) : 10=1.02%, 20=0.61%, 50=6.71%, 100=12.94%, 250=43.85% 00:26:14.013 lat (msec) : 500=34.52%, 750=0.35% 00:26:14.013 cpu : usr=0.27%, sys=1.12%, ctx=589, majf=0, minf=4097 00:26:14.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:14.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:14.014 issued rwts: total=3129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.014 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:14.014 job10: (groupid=0, jobs=1): err= 0: pid=2537712: Thu Nov 28 18:28:24 2024 00:26:14.014 read: IOPS=165, BW=41.3MiB/s (43.3MB/s)(417MiB/10093msec) 00:26:14.014 slat (usec): min=13, max=232301, avg=6002.81, stdev=20785.10 00:26:14.014 clat (msec): min=38, max=689, avg=381.23, stdev=99.18 00:26:14.014 lat (msec): min=38, max=689, avg=387.24, stdev=100.78 00:26:14.014 clat percentiles (msec): 00:26:14.014 | 1.00th=[ 42], 5.00th=[ 224], 10.00th=[ 255], 20.00th=[ 338], 00:26:14.014 | 30.00th=[ 351], 40.00th=[ 368], 50.00th=[ 388], 60.00th=[ 401], 00:26:14.014 | 70.00th=[ 422], 80.00th=[ 443], 90.00th=[ 481], 95.00th=[ 535], 00:26:14.014 | 99.00th=[ 609], 99.50th=[ 693], 99.90th=[ 693], 99.95th=[ 693], 00:26:14.014 | 99.99th=[ 693] 00:26:14.014 bw ( KiB/s): min=18944, max=62976, per=4.64%, avg=41032.70, stdev=8173.98, samples=20 00:26:14.014 iops : min= 74, max= 246, avg=160.25, stdev=31.93, samples=20 00:26:14.014 lat (msec) : 50=2.64%, 100=0.30%, 250=6.54%, 500=83.86%, 750=6.66% 00:26:14.014 cpu : usr=0.08%, sys=0.64%, ctx=194, majf=0, minf=4098 00:26:14.014 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:14.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.014 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:14.014 issued rwts: total=1667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.014 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:14.014 00:26:14.014 Run status group 0 (all jobs): 00:26:14.014 READ: bw=864MiB/s (906MB/s), 38.4MiB/s-254MiB/s (40.3MB/s-266MB/s), io=8777MiB (9203MB), run=10029-10163msec 
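A note on reading the aggregate line above: fio's per= value printed in each job's bw line is that job's average bandwidth expressed as a share of the whole group's bandwidth. As a rough check under that interpretation, job4 averaged 47687 KiB/s and the READ group totals 864 MiB/s, i.e. about 884736 KiB/s, so 47687 / 884736 is roughly 5.39%, which matches the per=5.39% shown for job4; the same check works for the other jobs and for the WRITE group further down.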
00:26:14.014 00:26:14.014 Disk stats (read/write): 00:26:14.014 nvme0n1: ios=3497/0, merge=0/0, ticks=1231055/0, in_queue=1231055, util=97.12% 00:26:14.014 nvme10n1: ios=5989/0, merge=0/0, ticks=1217898/0, in_queue=1217898, util=97.34% 00:26:14.014 nvme1n1: ios=2979/0, merge=0/0, ticks=1211097/0, in_queue=1211097, util=97.62% 00:26:14.014 nvme2n1: ios=20041/0, merge=0/0, ticks=1235265/0, in_queue=1235265, util=97.77% 00:26:14.014 nvme3n1: ios=3678/0, merge=0/0, ticks=1225622/0, in_queue=1225622, util=97.86% 00:26:14.014 nvme4n1: ios=3200/0, merge=0/0, ticks=1214610/0, in_queue=1214610, util=98.21% 00:26:14.014 nvme5n1: ios=3406/0, merge=0/0, ticks=1225549/0, in_queue=1225549, util=98.38% 00:26:14.014 nvme6n1: ios=5253/0, merge=0/0, ticks=1206342/0, in_queue=1206342, util=98.50% 00:26:14.014 nvme7n1: ios=10966/0, merge=0/0, ticks=1189756/0, in_queue=1189756, util=98.92% 00:26:14.014 nvme8n1: ios=6095/0, merge=0/0, ticks=1240404/0, in_queue=1240404, util=99.14% 00:26:14.014 nvme9n1: ios=3206/0, merge=0/0, ticks=1221404/0, in_queue=1221404, util=99.29% 00:26:14.014 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:14.014 [global] 00:26:14.014 thread=1 00:26:14.014 invalidate=1 00:26:14.014 rw=randwrite 00:26:14.014 time_based=1 00:26:14.014 runtime=10 00:26:14.014 ioengine=libaio 00:26:14.014 direct=1 00:26:14.014 bs=262144 00:26:14.014 iodepth=64 00:26:14.014 norandommap=1 00:26:14.014 numjobs=1 00:26:14.014 00:26:14.014 [job0] 00:26:14.014 filename=/dev/nvme0n1 00:26:14.014 [job1] 00:26:14.014 filename=/dev/nvme10n1 00:26:14.014 [job2] 00:26:14.014 filename=/dev/nvme1n1 00:26:14.014 [job3] 00:26:14.014 filename=/dev/nvme2n1 00:26:14.014 [job4] 00:26:14.014 filename=/dev/nvme3n1 00:26:14.014 [job5] 00:26:14.014 filename=/dev/nvme4n1 00:26:14.014 [job6] 00:26:14.014 filename=/dev/nvme5n1 00:26:14.014 [job7] 00:26:14.014 filename=/dev/nvme6n1 00:26:14.014 [job8] 00:26:14.014 filename=/dev/nvme7n1 00:26:14.014 [job9] 00:26:14.014 filename=/dev/nvme8n1 00:26:14.014 [job10] 00:26:14.014 filename=/dev/nvme9n1 00:26:14.014 Could not set queue depth (nvme0n1) 00:26:14.014 Could not set queue depth (nvme10n1) 00:26:14.014 Could not set queue depth (nvme1n1) 00:26:14.014 Could not set queue depth (nvme2n1) 00:26:14.014 Could not set queue depth (nvme3n1) 00:26:14.014 Could not set queue depth (nvme4n1) 00:26:14.014 Could not set queue depth (nvme5n1) 00:26:14.014 Could not set queue depth (nvme6n1) 00:26:14.014 Could not set queue depth (nvme7n1) 00:26:14.014 Could not set queue depth (nvme8n1) 00:26:14.014 Could not set queue depth (nvme9n1) 00:26:14.014 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:14.014 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:14.014 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:14.014 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:14.014 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:14.014 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:14.014 job6: (g=0): rw=randwrite, bs=(R) 
256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:14.014 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:14.014 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:14.014 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:14.014 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:14.014 fio-3.35 00:26:14.014 Starting 11 threads 00:26:23.988 00:26:23.988 job0: (groupid=0, jobs=1): err= 0: pid=2538442: Thu Nov 28 18:28:35 2024 00:26:23.988 write: IOPS=257, BW=64.5MiB/s (67.6MB/s)(653MiB/10130msec); 0 zone resets 00:26:23.988 slat (usec): min=18, max=76339, avg=1996.82, stdev=7203.99 00:26:23.988 clat (usec): min=1835, max=668514, avg=246013.84, stdev=166118.78 00:26:23.988 lat (usec): min=1896, max=668570, avg=248010.66, stdev=168198.89 00:26:23.988 clat percentiles (msec): 00:26:23.988 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 50], 20.00th=[ 103], 00:26:23.988 | 30.00th=[ 142], 40.00th=[ 169], 50.00th=[ 228], 60.00th=[ 271], 00:26:23.988 | 70.00th=[ 296], 80.00th=[ 372], 90.00th=[ 535], 95.00th=[ 575], 00:26:23.989 | 99.00th=[ 625], 99.50th=[ 642], 99.90th=[ 667], 99.95th=[ 667], 00:26:23.989 | 99.99th=[ 667] 00:26:23.989 bw ( KiB/s): min=26624, max=113152, per=7.69%, avg=65272.90, stdev=26599.44, samples=20 00:26:23.989 iops : min= 104, max= 442, avg=254.95, stdev=103.90, samples=20 00:26:23.989 lat (msec) : 2=0.04%, 4=0.34%, 10=6.24%, 20=0.84%, 50=2.79% 00:26:23.989 lat (msec) : 100=9.41%, 250=35.17%, 500=32.84%, 750=12.32% 00:26:23.989 cpu : usr=0.77%, sys=1.03%, ctx=1830, majf=0, minf=1 00:26:23.989 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:23.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.989 issued rwts: total=0,2613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.989 job1: (groupid=0, jobs=1): err= 0: pid=2538454: Thu Nov 28 18:28:35 2024 00:26:23.989 write: IOPS=370, BW=92.6MiB/s (97.1MB/s)(940MiB/10149msec); 0 zone resets 00:26:23.989 slat (usec): min=24, max=70149, avg=2375.65, stdev=5260.99 00:26:23.989 clat (usec): min=1968, max=548579, avg=170213.90, stdev=84060.20 00:26:23.989 lat (msec): min=2, max=548, avg=172.59, stdev=85.22 00:26:23.989 clat percentiles (msec): 00:26:23.989 | 1.00th=[ 10], 5.00th=[ 45], 10.00th=[ 77], 20.00th=[ 100], 00:26:23.989 | 30.00th=[ 117], 40.00th=[ 129], 50.00th=[ 163], 60.00th=[ 192], 00:26:23.989 | 70.00th=[ 218], 80.00th=[ 236], 90.00th=[ 279], 95.00th=[ 317], 00:26:23.989 | 99.00th=[ 414], 99.50th=[ 443], 99.90th=[ 523], 99.95th=[ 550], 00:26:23.989 | 99.99th=[ 550] 00:26:23.989 bw ( KiB/s): min=38912, max=181760, per=11.16%, avg=94642.10, stdev=37837.69, samples=20 00:26:23.989 iops : min= 152, max= 710, avg=369.65, stdev=147.84, samples=20 00:26:23.989 lat (msec) : 2=0.03%, 4=0.11%, 10=1.04%, 20=1.52%, 50=2.87% 00:26:23.989 lat (msec) : 100=14.71%, 250=63.62%, 500=15.93%, 750=0.19% 00:26:23.989 cpu : usr=1.07%, sys=1.30%, ctx=1386, majf=0, minf=1 00:26:23.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:23.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:26:23.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.989 issued rwts: total=0,3760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.989 job2: (groupid=0, jobs=1): err= 0: pid=2538455: Thu Nov 28 18:28:35 2024 00:26:23.989 write: IOPS=362, BW=90.5MiB/s (94.9MB/s)(919MiB/10147msec); 0 zone resets 00:26:23.989 slat (usec): min=23, max=155672, avg=1956.36, stdev=5366.91 00:26:23.989 clat (msec): min=6, max=556, avg=174.47, stdev=84.46 00:26:23.989 lat (msec): min=6, max=557, avg=176.43, stdev=85.34 00:26:23.989 clat percentiles (msec): 00:26:23.989 | 1.00th=[ 23], 5.00th=[ 44], 10.00th=[ 77], 20.00th=[ 118], 00:26:23.989 | 30.00th=[ 127], 40.00th=[ 133], 50.00th=[ 157], 60.00th=[ 190], 00:26:23.989 | 70.00th=[ 213], 80.00th=[ 230], 90.00th=[ 292], 95.00th=[ 334], 00:26:23.989 | 99.00th=[ 414], 99.50th=[ 426], 99.90th=[ 531], 99.95th=[ 531], 00:26:23.989 | 99.99th=[ 558] 00:26:23.989 bw ( KiB/s): min=37888, max=153088, per=10.90%, avg=92459.80, stdev=33527.62, samples=20 00:26:23.989 iops : min= 148, max= 598, avg=361.15, stdev=130.98, samples=20 00:26:23.989 lat (msec) : 10=0.05%, 20=0.19%, 50=5.85%, 100=8.03%, 250=69.63% 00:26:23.989 lat (msec) : 500=16.00%, 750=0.24% 00:26:23.989 cpu : usr=1.11%, sys=1.19%, ctx=1778, majf=0, minf=1 00:26:23.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:23.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.989 issued rwts: total=0,3675,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.989 job3: (groupid=0, jobs=1): err= 0: pid=2538456: Thu Nov 28 18:28:35 2024 00:26:23.989 write: IOPS=216, BW=54.1MiB/s (56.8MB/s)(548MiB/10119msec); 0 zone resets 00:26:23.989 slat (usec): min=14, max=47577, avg=3745.00, stdev=8985.45 00:26:23.989 clat (msec): min=2, max=665, avg=291.71, stdev=157.89 00:26:23.989 lat (msec): min=2, max=665, avg=295.45, stdev=160.04 00:26:23.989 clat percentiles (msec): 00:26:23.989 | 1.00th=[ 10], 5.00th=[ 62], 10.00th=[ 69], 20.00th=[ 157], 00:26:23.989 | 30.00th=[ 218], 40.00th=[ 253], 50.00th=[ 279], 60.00th=[ 296], 00:26:23.989 | 70.00th=[ 347], 80.00th=[ 430], 90.00th=[ 550], 95.00th=[ 584], 00:26:23.989 | 99.00th=[ 642], 99.50th=[ 651], 99.90th=[ 659], 99.95th=[ 667], 00:26:23.989 | 99.99th=[ 667] 00:26:23.989 bw ( KiB/s): min=26624, max=130821, per=6.42%, avg=54486.55, stdev=27226.48, samples=20 00:26:23.989 iops : min= 104, max= 511, avg=212.80, stdev=106.38, samples=20 00:26:23.989 lat (msec) : 4=0.09%, 10=0.91%, 20=0.78%, 50=1.87%, 100=11.00% 00:26:23.989 lat (msec) : 250=24.42%, 500=47.42%, 750=13.51% 00:26:23.989 cpu : usr=0.68%, sys=0.60%, ctx=951, majf=0, minf=2 00:26:23.989 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:23.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.989 issued rwts: total=0,2191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.989 job4: (groupid=0, jobs=1): err= 0: pid=2538457: Thu Nov 28 18:28:35 2024 00:26:23.989 write: IOPS=202, BW=50.6MiB/s (53.0MB/s)(513MiB/10147msec); 0 zone resets 00:26:23.989 slat (usec): min=15, max=48031, avg=4138.34, 
stdev=9389.91 00:26:23.989 clat (msec): min=8, max=641, avg=312.03, stdev=142.96 00:26:23.989 lat (msec): min=13, max=641, avg=316.17, stdev=144.96 00:26:23.989 clat percentiles (msec): 00:26:23.989 | 1.00th=[ 26], 5.00th=[ 80], 10.00th=[ 155], 20.00th=[ 209], 00:26:23.989 | 30.00th=[ 249], 40.00th=[ 271], 50.00th=[ 284], 60.00th=[ 309], 00:26:23.989 | 70.00th=[ 351], 80.00th=[ 426], 90.00th=[ 558], 95.00th=[ 592], 00:26:23.989 | 99.00th=[ 642], 99.50th=[ 642], 99.90th=[ 642], 99.95th=[ 642], 00:26:23.989 | 99.99th=[ 642] 00:26:23.989 bw ( KiB/s): min=24576, max=94396, per=6.01%, avg=50950.30, stdev=19888.99, samples=20 00:26:23.989 iops : min= 96, max= 368, avg=198.95, stdev=77.65, samples=20 00:26:23.989 lat (msec) : 10=0.05%, 20=0.49%, 50=2.63%, 100=3.41%, 250=23.72% 00:26:23.989 lat (msec) : 500=55.33%, 750=14.37% 00:26:23.989 cpu : usr=0.70%, sys=0.68%, ctx=830, majf=0, minf=1 00:26:23.989 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:23.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.989 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.989 issued rwts: total=0,2053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.989 job5: (groupid=0, jobs=1): err= 0: pid=2538458: Thu Nov 28 18:28:35 2024 00:26:23.989 write: IOPS=371, BW=92.8MiB/s (97.3MB/s)(943MiB/10162msec); 0 zone resets 00:26:23.989 slat (usec): min=22, max=283969, avg=1882.68, stdev=7327.82 00:26:23.989 clat (usec): min=1082, max=588322, avg=169906.95, stdev=93354.19 00:26:23.989 lat (usec): min=1124, max=588360, avg=171789.63, stdev=94011.34 00:26:23.989 clat percentiles (msec): 00:26:23.989 | 1.00th=[ 5], 5.00th=[ 25], 10.00th=[ 72], 20.00th=[ 108], 00:26:23.989 | 30.00th=[ 122], 40.00th=[ 128], 50.00th=[ 136], 60.00th=[ 182], 00:26:23.989 | 70.00th=[ 215], 80.00th=[ 239], 90.00th=[ 292], 95.00th=[ 342], 00:26:23.989 | 99.00th=[ 464], 99.50th=[ 510], 99.90th=[ 558], 99.95th=[ 592], 00:26:23.989 | 99.99th=[ 592] 00:26:23.989 bw ( KiB/s): min=37376, max=138752, per=11.19%, avg=94950.95, stdev=28309.33, samples=20 00:26:23.989 iops : min= 146, max= 542, avg=370.85, stdev=110.62, samples=20 00:26:23.989 lat (msec) : 2=0.13%, 4=0.66%, 10=2.15%, 20=1.41%, 50=3.63% 00:26:23.989 lat (msec) : 100=7.45%, 250=68.03%, 500=16.04%, 750=0.50% 00:26:23.989 cpu : usr=1.07%, sys=1.15%, ctx=1805, majf=0, minf=1 00:26:23.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:23.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.989 issued rwts: total=0,3772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.989 job6: (groupid=0, jobs=1): err= 0: pid=2538459: Thu Nov 28 18:28:35 2024 00:26:23.990 write: IOPS=305, BW=76.4MiB/s (80.1MB/s)(775MiB/10148msec); 0 zone resets 00:26:23.990 slat (usec): min=14, max=74859, avg=2397.75, stdev=7228.83 00:26:23.990 clat (usec): min=1827, max=687836, avg=207009.81, stdev=164824.46 00:26:23.990 lat (usec): min=1907, max=687870, avg=209407.56, stdev=167013.51 00:26:23.990 clat percentiles (msec): 00:26:23.990 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 24], 20.00th=[ 55], 00:26:23.990 | 30.00th=[ 69], 40.00th=[ 153], 50.00th=[ 186], 60.00th=[ 230], 00:26:23.990 | 70.00th=[ 271], 80.00th=[ 309], 90.00th=[ 468], 95.00th=[ 567], 00:26:23.990 | 
99.00th=[ 659], 99.50th=[ 676], 99.90th=[ 676], 99.95th=[ 676], 00:26:23.990 | 99.99th=[ 684] 00:26:23.990 bw ( KiB/s): min=22528, max=291257, per=9.16%, avg=77723.20, stdev=64708.36, samples=20 00:26:23.990 iops : min= 88, max= 1137, avg=303.55, stdev=252.65, samples=20 00:26:23.990 lat (msec) : 2=0.10%, 4=0.45%, 10=4.35%, 20=3.94%, 50=9.84% 00:26:23.990 lat (msec) : 100=15.48%, 250=31.03%, 500=25.74%, 750=9.06% 00:26:23.990 cpu : usr=0.76%, sys=1.04%, ctx=1744, majf=0, minf=1 00:26:23.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:23.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.990 issued rwts: total=0,3100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.990 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.990 job7: (groupid=0, jobs=1): err= 0: pid=2538460: Thu Nov 28 18:28:35 2024 00:26:23.990 write: IOPS=240, BW=60.0MiB/s (62.9MB/s)(611MiB/10178msec); 0 zone resets 00:26:23.990 slat (usec): min=24, max=91874, avg=2722.90, stdev=8434.20 00:26:23.990 clat (usec): min=1030, max=642323, avg=263680.47, stdev=167911.79 00:26:23.990 lat (usec): min=1106, max=642382, avg=266403.37, stdev=170509.00 00:26:23.990 clat percentiles (msec): 00:26:23.990 | 1.00th=[ 4], 5.00th=[ 23], 10.00th=[ 39], 20.00th=[ 96], 00:26:23.990 | 30.00th=[ 157], 40.00th=[ 211], 50.00th=[ 264], 60.00th=[ 309], 00:26:23.990 | 70.00th=[ 347], 80.00th=[ 384], 90.00th=[ 535], 95.00th=[ 584], 00:26:23.990 | 99.00th=[ 634], 99.50th=[ 642], 99.90th=[ 642], 99.95th=[ 642], 00:26:23.990 | 99.99th=[ 642] 00:26:23.990 bw ( KiB/s): min=24576, max=114688, per=7.18%, avg=60945.40, stdev=30147.15, samples=20 00:26:23.990 iops : min= 96, max= 448, avg=238.05, stdev=117.75, samples=20 00:26:23.990 lat (msec) : 2=0.29%, 4=1.06%, 10=1.27%, 20=1.76%, 50=8.72% 00:26:23.990 lat (msec) : 100=7.57%, 250=26.35%, 500=41.61%, 750=11.37% 00:26:23.990 cpu : usr=0.74%, sys=0.82%, ctx=1586, majf=0, minf=1 00:26:23.990 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:23.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.990 issued rwts: total=0,2444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.990 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.990 job8: (groupid=0, jobs=1): err= 0: pid=2538461: Thu Nov 28 18:28:35 2024 00:26:23.990 write: IOPS=417, BW=104MiB/s (109MB/s)(1064MiB/10205msec); 0 zone resets 00:26:23.990 slat (usec): min=16, max=70254, avg=1500.39, stdev=5133.01 00:26:23.990 clat (usec): min=1314, max=688714, avg=151867.84, stdev=139794.04 00:26:23.990 lat (usec): min=1361, max=711747, avg=153368.23, stdev=141115.79 00:26:23.990 clat percentiles (msec): 00:26:23.990 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 34], 20.00th=[ 56], 00:26:23.990 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 102], 60.00th=[ 123], 00:26:23.990 | 70.00th=[ 155], 80.00th=[ 234], 90.00th=[ 355], 95.00th=[ 472], 00:26:23.990 | 99.00th=[ 651], 99.50th=[ 659], 99.90th=[ 667], 99.95th=[ 684], 00:26:23.990 | 99.99th=[ 693] 00:26:23.990 bw ( KiB/s): min=27648, max=221696, per=12.65%, avg=107325.35, stdev=60242.31, samples=20 00:26:23.990 iops : min= 108, max= 866, avg=419.20, stdev=235.31, samples=20 00:26:23.990 lat (msec) : 2=0.14%, 4=0.42%, 10=1.15%, 20=3.83%, 50=11.09% 00:26:23.990 lat (msec) : 100=32.26%, 250=32.42%, 500=14.33%, 
750=4.35% 00:26:23.990 cpu : usr=1.26%, sys=1.44%, ctx=2565, majf=0, minf=2 00:26:23.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:23.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.990 issued rwts: total=0,4256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.990 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.990 job9: (groupid=0, jobs=1): err= 0: pid=2538462: Thu Nov 28 18:28:35 2024 00:26:23.990 write: IOPS=244, BW=61.2MiB/s (64.2MB/s)(625MiB/10204msec); 0 zone resets 00:26:23.990 slat (usec): min=15, max=196553, avg=3150.63, stdev=9955.75 00:26:23.990 clat (usec): min=1158, max=818654, avg=258125.31, stdev=175743.30 00:26:23.990 lat (usec): min=1184, max=818683, avg=261275.94, stdev=177823.47 00:26:23.990 clat percentiles (msec): 00:26:23.990 | 1.00th=[ 6], 5.00th=[ 19], 10.00th=[ 42], 20.00th=[ 117], 00:26:23.990 | 30.00th=[ 161], 40.00th=[ 184], 50.00th=[ 230], 60.00th=[ 279], 00:26:23.990 | 70.00th=[ 326], 80.00th=[ 359], 90.00th=[ 502], 95.00th=[ 667], 00:26:23.990 | 99.00th=[ 776], 99.50th=[ 785], 99.90th=[ 818], 99.95th=[ 818], 00:26:23.990 | 99.99th=[ 818] 00:26:23.990 bw ( KiB/s): min=16384, max=127488, per=7.35%, avg=62338.00, stdev=29528.29, samples=20 00:26:23.990 iops : min= 64, max= 498, avg=243.45, stdev=115.40, samples=20 00:26:23.990 lat (msec) : 2=0.44%, 4=0.32%, 10=1.48%, 20=3.08%, 50=5.52% 00:26:23.990 lat (msec) : 100=6.69%, 250=37.23%, 500=35.11%, 750=8.89%, 1000=1.24% 00:26:23.990 cpu : usr=0.71%, sys=0.89%, ctx=1292, majf=0, minf=1 00:26:23.990 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:23.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.990 issued rwts: total=0,2498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.990 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.990 job10: (groupid=0, jobs=1): err= 0: pid=2538463: Thu Nov 28 18:28:35 2024 00:26:23.990 write: IOPS=341, BW=85.3MiB/s (89.5MB/s)(864MiB/10118msec); 0 zone resets 00:26:23.990 slat (usec): min=21, max=213229, avg=1666.98, stdev=6819.70 00:26:23.990 clat (usec): min=1123, max=783210, avg=185728.57, stdev=163356.56 00:26:23.990 lat (usec): min=1163, max=783320, avg=187395.55, stdev=164899.54 00:26:23.990 clat percentiles (msec): 00:26:23.990 | 1.00th=[ 5], 5.00th=[ 19], 10.00th=[ 36], 20.00th=[ 61], 00:26:23.990 | 30.00th=[ 66], 40.00th=[ 78], 50.00th=[ 142], 60.00th=[ 180], 00:26:23.990 | 70.00th=[ 247], 80.00th=[ 309], 90.00th=[ 401], 95.00th=[ 584], 00:26:23.990 | 99.00th=[ 667], 99.50th=[ 709], 99.90th=[ 768], 99.95th=[ 785], 00:26:23.990 | 99.99th=[ 785] 00:26:23.990 bw ( KiB/s): min=24576, max=259584, per=10.23%, avg=86817.00, stdev=65189.42, samples=20 00:26:23.990 iops : min= 96, max= 1014, avg=339.05, stdev=254.67, samples=20 00:26:23.990 lat (msec) : 2=0.14%, 4=0.64%, 10=1.42%, 20=3.85%, 50=9.99% 00:26:23.990 lat (msec) : 100=27.30%, 250=27.13%, 500=22.93%, 750=6.34%, 1000=0.26% 00:26:23.990 cpu : usr=0.99%, sys=1.14%, ctx=2086, majf=0, minf=1 00:26:23.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:23.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:23.990 issued rwts: total=0,3454,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:26:23.990 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:23.990 00:26:23.990 Run status group 0 (all jobs): 00:26:23.990 WRITE: bw=828MiB/s (869MB/s), 50.6MiB/s-104MiB/s (53.0MB/s-109MB/s), io=8454MiB (8865MB), run=10118-10205msec 00:26:23.990 00:26:23.990 Disk stats (read/write): 00:26:23.990 nvme0n1: ios=49/5014, merge=0/0, ticks=85/1216306, in_queue=1216391, util=97.71% 00:26:23.990 nvme10n1: ios=47/7372, merge=0/0, ticks=2003/1192593, in_queue=1194596, util=100.00% 00:26:23.990 nvme1n1: ios=38/7202, merge=0/0, ticks=1469/1200391, in_queue=1201860, util=100.00% 00:26:23.990 nvme2n1: ios=0/4179, merge=0/0, ticks=0/1213482, in_queue=1213482, util=97.78% 00:26:23.991 nvme3n1: ios=0/3944, merge=0/0, ticks=0/1204507, in_queue=1204507, util=97.87% 00:26:23.991 nvme4n1: ios=45/7401, merge=0/0, ticks=5063/1151426, in_queue=1156489, util=100.00% 00:26:23.991 nvme5n1: ios=0/6036, merge=0/0, ticks=0/1209027, in_queue=1209027, util=98.38% 00:26:23.991 nvme6n1: ios=44/4882, merge=0/0, ticks=161/1248343, in_queue=1248504, util=99.56% 00:26:23.991 nvme7n1: ios=0/8482, merge=0/0, ticks=0/1250067, in_queue=1250067, util=98.88% 00:26:23.991 nvme8n1: ios=0/4968, merge=0/0, ticks=0/1241929, in_queue=1241929, util=99.05% 00:26:23.991 nvme9n1: ios=0/6723, merge=0/0, ticks=0/1227131, in_queue=1227131, util=99.12% 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:23.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.991 18:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:23.991 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.991 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:24.248 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.248 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.248 18:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:24.506 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.506 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:25.073 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.073 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.073 18:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:25.073 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:25.073 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:25.073 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:25.073 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:25.073 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:25.332 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:25.332 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:25.332 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:25.332 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:25.332 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.332 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.332 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.332 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.332 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:25.590 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.590 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.590 18:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:25.849 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:25.849 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:26.107 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:26.107 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:26.107 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:26.107 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:26.107 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:26.107 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:26.107 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:26.107 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:26.108 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:26.108 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.108 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:26.108 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.108 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.108 18:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:26.366 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.366 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:26.624 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:26.624 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:26.624 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:26.624 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:26.624 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:26.624 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:26.624 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:26.624 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:26.624 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 
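The teardown traced above (multiconnection.sh lines 36-43) reduces to roughly the following shell, reconstructed from the xtrace output; the helper names (waitforserial_disconnect, rpc_cmd) and the NVMF_SUBSYS count of 11 are taken from the trace itself rather than verified against the script source, so treat this as a sketch:

    sync
    for i in $(seq 1 $NVMF_SUBSYS); do                               # NVMF_SUBSYS=11 connections in this run
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"           # drop the host-side connection
        waitforserial_disconnect "SPDK${i}"                          # poll lsblk until serial SPDK${i} disappears
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"  # remove the subsystem on the target
    done
    rm -f ./local-job0-0-verify.state                                # clean up fio's leftover verify-state file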
00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:26.625 rmmod nvme_tcp 00:26:26.625 rmmod nvme_fabrics 00:26:26.625 rmmod nvme_keyring 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 2533167 ']' 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 2533167 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 2533167 ']' 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 2533167 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2533167 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2533167' 00:26:26.625 killing process with pid 2533167 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 2533167 00:26:26.625 18:28:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 2533167 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@791 -- # iptables-restore 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.908 18:28:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:31.810 00:26:31.810 real 1m6.016s 00:26:31.810 user 3m52.776s 00:26:31.810 sys 0m16.731s 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.810 ************************************ 00:26:31.810 END TEST nvmf_multiconnection 00:26:31.810 ************************************ 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:31.810 ************************************ 00:26:31.810 START TEST nvmf_initiator_timeout 00:26:31.810 ************************************ 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:31.810 * Looking for test storage... 
00:26:31.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.810 --rc genhtml_branch_coverage=1 00:26:31.810 --rc genhtml_function_coverage=1 00:26:31.810 --rc genhtml_legend=1 00:26:31.810 --rc geninfo_all_blocks=1 00:26:31.810 --rc geninfo_unexecuted_blocks=1 00:26:31.810 00:26:31.810 ' 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.810 --rc genhtml_branch_coverage=1 00:26:31.810 --rc genhtml_function_coverage=1 00:26:31.810 --rc genhtml_legend=1 00:26:31.810 --rc geninfo_all_blocks=1 00:26:31.810 --rc geninfo_unexecuted_blocks=1 00:26:31.810 00:26:31.810 ' 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.810 --rc genhtml_branch_coverage=1 00:26:31.810 --rc genhtml_function_coverage=1 00:26:31.810 --rc genhtml_legend=1 00:26:31.810 --rc geninfo_all_blocks=1 00:26:31.810 --rc geninfo_unexecuted_blocks=1 00:26:31.810 00:26:31.810 ' 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.810 --rc genhtml_branch_coverage=1 00:26:31.810 --rc genhtml_function_coverage=1 00:26:31.810 --rc genhtml_legend=1 00:26:31.810 --rc geninfo_all_blocks=1 00:26:31.810 --rc geninfo_unexecuted_blocks=1 00:26:31.810 00:26:31.810 ' 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.810 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.811 18:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:31.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:31.811 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:33.715 18:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:33.715 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:33.715 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.716 18:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:33.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:33.716 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.716 18:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:33.716 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:33.716 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:33.975 18:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:33.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:33.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:26:33.975 00:26:33.975 --- 10.0.0.2 ping statistics --- 00:26:33.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.975 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:33.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:33.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:26:33.975 00:26:33.975 --- 10.0.0.1 ping statistics --- 00:26:33.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:33.975 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:33.975 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=2542041 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 2542041 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 2542041 ']' 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:33.976 18:28:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:33.976 [2024-11-28 18:28:46.025140] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:26:33.976 [2024-11-28 18:28:46.025295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.234 [2024-11-28 18:28:46.170870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:34.234 [2024-11-28 18:28:46.306961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.234 [2024-11-28 18:28:46.307041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.234 [2024-11-28 18:28:46.307066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.234 [2024-11-28 18:28:46.307104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.234 [2024-11-28 18:28:46.307126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
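[editor's sketch] The launch traced above starts the SPDK target inside the test namespace with shared-memory id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF, matching the "Tracepoint Group Mask 0xFFFF" notice), and core mask 0xF, i.e. cores 0-3, which is why four reactors come up next. A minimal equivalent manual invocation is sketched below; the polling loop is illustrative only (the harness instead waits via its waitforlisten helper on the default /var/tmp/spdk.sock):

  sudo ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # poll the RPC socket until the target is ready to accept configuration RPCs
  until sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done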
00:26:34.234 [2024-11-28 18:28:46.309909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.234 [2024-11-28 18:28:46.309980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:34.234 [2024-11-28 18:28:46.310070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.234 [2024-11-28 18:28:46.310075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.169 Malloc0 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.169 Delay0 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.169 [2024-11-28 18:28:47.188350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.169 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.170 18:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:35.170 [2024-11-28 18:28:47.217990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.170 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:36.105 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:36.105 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:36.105 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:36.105 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:36.105 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:38.006 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:38.006 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:38.006 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:38.006 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:38.006 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:38.006 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:38.006 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2542482 00:26:38.006 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:26:38.006 18:28:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:38.006 [global] 00:26:38.006 thread=1 00:26:38.006 invalidate=1 00:26:38.006 rw=write 00:26:38.006 time_based=1 00:26:38.006 runtime=60 00:26:38.006 ioengine=libaio 00:26:38.006 direct=1 00:26:38.006 bs=4096 00:26:38.006 iodepth=1 00:26:38.006 norandommap=0 00:26:38.006 numjobs=1 00:26:38.006 00:26:38.006 verify_dump=1 00:26:38.006 verify_backlog=512 00:26:38.006 verify_state_save=0 00:26:38.006 do_verify=1 00:26:38.006 verify=crc32c-intel 00:26:38.006 [job0] 00:26:38.006 filename=/dev/nvme0n1 00:26:38.006 Could not set queue depth (nvme0n1) 00:26:38.006 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:38.006 fio-3.35 00:26:38.006 Starting 1 thread 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.346 true 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.346 true 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.346 true 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.346 true 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.346 18:28:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:26:43.900 true 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:43.900 true 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:43.900 true 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:43.900 true 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:43.900 18:28:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2542482 00:27:40.108 00:27:40.108 job0: (groupid=0, jobs=1): err= 0: pid=2542551: Thu Nov 28 18:29:50 2024 00:27:40.108 read: IOPS=49, BW=198KiB/s (203kB/s)(11.6MiB/60015msec) 00:27:40.108 slat (nsec): min=4494, max=65844, avg=15892.14, stdev=9411.35 00:27:40.108 clat (usec): min=268, max=40871k, avg=19885.10, stdev=749974.85 00:27:40.108 lat (usec): min=273, max=40871k, avg=19900.99, stdev=749975.40 00:27:40.108 clat percentiles (usec): 00:27:40.108 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 293], 00:27:40.108 | 20.00th=[ 302], 30.00th=[ 314], 40.00th=[ 326], 00:27:40.108 | 50.00th=[ 338], 60.00th=[ 355], 70.00th=[ 379], 00:27:40.108 | 80.00th=[ 388], 90.00th=[ 41157], 95.00th=[ 41157], 00:27:40.108 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:27:40.108 | 99.95th=[ 43779], 99.99th=[17112761] 00:27:40.108 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60015msec); 0 zone resets 00:27:40.108 slat (usec): min=5, max=11696, avg=21.97, stdev=276.24 00:27:40.108 clat (usec): min=206, max=1002, avg=265.18, stdev=48.16 00:27:40.108 lat (usec): min=215, max=12028, avg=287.15, stdev=282.89 00:27:40.108 clat percentiles (usec): 00:27:40.108 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 233], 00:27:40.108 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 260], 00:27:40.108 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 359], 00:27:40.108 | 99.00th=[ 408], 99.50th=[ 433], 99.90th=[ 725], 99.95th=[ 816], 00:27:40.108 | 99.99th=[ 
1004] 00:27:40.108 bw ( KiB/s): min= 4096, max= 7008, per=100.00%, avg=4915.20, stdev=1277.32, samples=5 00:27:40.108 iops : min= 1024, max= 1752, avg=1228.80, stdev=319.33, samples=5 00:27:40.108 lat (usec) : 250=26.98%, 500=65.69%, 750=0.23%, 1000=0.05% 00:27:40.108 lat (msec) : 2=0.05%, 50=6.98%, >=2000=0.02% 00:27:40.108 cpu : usr=0.09%, sys=0.18%, ctx=6046, majf=0, minf=1 00:27:40.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.108 issued rwts: total=2970,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:40.108 00:27:40.108 Run status group 0 (all jobs): 00:27:40.108 READ: bw=198KiB/s (203kB/s), 198KiB/s-198KiB/s (203kB/s-203kB/s), io=11.6MiB (12.2MB), run=60015-60015msec 00:27:40.108 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60015-60015msec 00:27:40.108 00:27:40.108 Disk stats (read/write): 00:27:40.108 nvme0n1: ios=3066/3072, merge=0/0, ticks=18149/771, in_queue=18920, util=99.54% 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:40.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:40.108 nvmf hotplug test: fio successful as expected 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - 
SIGINT SIGTERM EXIT 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:40.108 rmmod nvme_tcp 00:27:40.108 rmmod nvme_fabrics 00:27:40.108 rmmod nvme_keyring 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 2542041 ']' 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 2542041 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 2542041 ']' 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 2542041 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2542041 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:40.108 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:40.109 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2542041' 00:27:40.109 killing process with pid 2542041 00:27:40.109 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 2542041 00:27:40.109 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 2542041 00:27:40.109 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:40.109 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:40.109 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:40.109 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:40.109 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:40.109 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:40.109 18:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:40.109 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:40.109 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:40.109 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.109 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.109 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.013 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:42.013 00:27:42.013 real 1m10.242s 00:27:42.013 user 4m16.126s 00:27:42.013 sys 0m7.247s 00:27:42.013 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:42.013 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:42.013 ************************************ 00:27:42.013 END TEST nvmf_initiator_timeout 00:27:42.013 ************************************ 00:27:42.013 18:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:42.013 18:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:42.014 18:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:42.014 18:29:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:42.014 18:29:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.915 18:29:55 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:43.915 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:43.915 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:43.915 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:43.915 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:43.915 ************************************ 00:27:43.915 START TEST nvmf_perf_adq 00:27:43.915 ************************************ 00:27:43.915 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:43.915 * Looking for test storage... 
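[editor's sketch] The NIC discovery traced above (here and in the earlier nvmftestinit pass) reduces to reading sysfs: each supported PCI function's net/ directory names the kernel interface bound to it, which is how 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1. A small illustration of that mapping, using one of the 0x8086:0x159b (E810) functions reported above:

  pci=0000:0a:00.0                      # one of the E810 functions found above
  ls "/sys/bus/pci/devices/$pci/net/"   # prints the bound interface, e.g. cvl_0_0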
00:27:43.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:43.915 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:43.915 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:27:43.915 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:44.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.174 --rc genhtml_branch_coverage=1 00:27:44.174 --rc genhtml_function_coverage=1 00:27:44.174 --rc genhtml_legend=1 00:27:44.174 --rc geninfo_all_blocks=1 00:27:44.174 --rc geninfo_unexecuted_blocks=1 00:27:44.174 00:27:44.174 ' 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:44.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.174 --rc genhtml_branch_coverage=1 00:27:44.174 --rc genhtml_function_coverage=1 00:27:44.174 --rc genhtml_legend=1 00:27:44.174 --rc geninfo_all_blocks=1 00:27:44.174 --rc geninfo_unexecuted_blocks=1 00:27:44.174 00:27:44.174 ' 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:44.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.174 --rc genhtml_branch_coverage=1 00:27:44.174 --rc genhtml_function_coverage=1 00:27:44.174 --rc genhtml_legend=1 00:27:44.174 --rc geninfo_all_blocks=1 00:27:44.174 --rc geninfo_unexecuted_blocks=1 00:27:44.174 00:27:44.174 ' 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:44.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.174 --rc genhtml_branch_coverage=1 00:27:44.174 --rc genhtml_function_coverage=1 00:27:44.174 --rc genhtml_legend=1 00:27:44.174 --rc geninfo_all_blocks=1 00:27:44.174 --rc geninfo_unexecuted_blocks=1 00:27:44.174 00:27:44.174 ' 00:27:44.174 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
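The lcov gate above ("lt 1.15 2") comes from scripts/common.sh, which splits both version strings on dots and dashes and compares them field by field. A simplified sketch of that comparison, assuming purely numeric fields; the real cmp_versions also tracks the two field counts (ver1_l/ver2_l in the trace) and supports operators other than "<":

    # Returns 0 (true) when version $1 sorts before version $2.
    version_lt() {
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1        # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov is a 1.x release"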
00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:44.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:44.175 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:44.175 18:29:56 
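The "[: : integer expression expected" message above is the shell complaining that nvmf/common.sh line 33 applies a numeric test to an empty value: '[' '' -eq 1 ']' is not a valid integer comparison, so the test exits with status 2 and the script simply falls through. A short illustration of the failure mode and one common guard; VAR is a placeholder here, not the variable common.sh actually tests:

    VAR=""
    [ "$VAR" -eq 1 ] && echo yes       # -> "[: : integer expression expected" (status 2)
    [ "${VAR:-0}" -eq 1 ] && echo yes  # defaulting to 0 keeps the test well-formed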
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:46.079 18:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:46.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:46.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:46.079 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:46.079 18:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:46.079 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:46.079 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:47.015 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:49.545 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.822 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:54.823 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:54.823 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:54.823 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:54.823 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:54.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:27:54.823 00:27:54.823 --- 10.0.0.2 ping statistics --- 00:27:54.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.823 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:54.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:27:54.823 00:27:54.823 --- 10.0.0.1 ping statistics --- 00:27:54.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.823 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2554433 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2554433 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2554433 ']' 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:54.823 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.824 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:54.824 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:54.824 [2024-11-28 18:30:06.526991] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
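nvmf_tcp_init above builds the test topology purely with the ip/iptables commands visible in the trace: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the 10.0.0.2 target, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, the NVMe/TCP port is opened in the firewall, and both directions are ping-tested. Collected here as a plain sequence (the harness additionally tags the iptables rule with an SPDK_NVMF comment so it can strip it again at teardown):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back again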
00:27:54.824 [2024-11-28 18:30:06.527148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.824 [2024-11-28 18:30:06.691924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.824 [2024-11-28 18:30:06.843842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.824 [2024-11-28 18:30:06.843918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.824 [2024-11-28 18:30:06.843943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.824 [2024-11-28 18:30:06.843969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.824 [2024-11-28 18:30:06.843988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.824 [2024-11-28 18:30:06.846910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.824 [2024-11-28 18:30:06.846967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.824 [2024-11-28 18:30:06.846996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.824 [2024-11-28 18:30:06.846993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.390 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.390 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:55.390 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:55.390 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:55.390 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.649 
18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.649 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.908 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.908 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:55.908 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.908 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.908 [2024-11-28 18:30:07.912405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.908 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.908 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:55.908 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.908 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.908 Malloc1 00:27:55.908 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.908 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:55.908 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.908 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.166 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.166 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:56.166 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.166 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:56.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:56.167 [2024-11-28 18:30:08.026889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2554936 00:27:56.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:56.167 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:58.065 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:58.065 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.065 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:58.065 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.065 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:58.065 "tick_rate": 2700000000, 00:27:58.065 "poll_groups": [ 00:27:58.065 { 00:27:58.065 "name": "nvmf_tgt_poll_group_000", 00:27:58.065 "admin_qpairs": 1, 00:27:58.065 "io_qpairs": 1, 00:27:58.065 "current_admin_qpairs": 1, 00:27:58.065 "current_io_qpairs": 1, 00:27:58.065 "pending_bdev_io": 0, 00:27:58.065 "completed_nvme_io": 16872, 00:27:58.065 "transports": [ 00:27:58.065 { 00:27:58.065 "trtype": "TCP" 00:27:58.065 } 00:27:58.065 ] 00:27:58.065 }, 00:27:58.065 { 00:27:58.065 "name": "nvmf_tgt_poll_group_001", 00:27:58.065 "admin_qpairs": 0, 00:27:58.065 "io_qpairs": 1, 00:27:58.065 "current_admin_qpairs": 0, 00:27:58.065 "current_io_qpairs": 1, 00:27:58.065 "pending_bdev_io": 0, 00:27:58.065 "completed_nvme_io": 16423, 00:27:58.065 "transports": [ 00:27:58.065 { 00:27:58.065 "trtype": "TCP" 00:27:58.065 } 00:27:58.065 ] 00:27:58.065 }, 00:27:58.065 { 00:27:58.065 "name": "nvmf_tgt_poll_group_002", 00:27:58.065 "admin_qpairs": 0, 00:27:58.065 "io_qpairs": 1, 00:27:58.065 "current_admin_qpairs": 0, 00:27:58.065 "current_io_qpairs": 1, 00:27:58.065 "pending_bdev_io": 0, 00:27:58.065 "completed_nvme_io": 14742, 00:27:58.065 "transports": [ 00:27:58.065 { 00:27:58.065 "trtype": "TCP" 00:27:58.065 } 00:27:58.065 ] 00:27:58.065 }, 00:27:58.065 { 00:27:58.065 "name": "nvmf_tgt_poll_group_003", 00:27:58.065 "admin_qpairs": 0, 00:27:58.065 "io_qpairs": 1, 00:27:58.065 "current_admin_qpairs": 0, 00:27:58.065 "current_io_qpairs": 1, 00:27:58.065 "pending_bdev_io": 0, 00:27:58.065 "completed_nvme_io": 16228, 00:27:58.065 "transports": [ 00:27:58.065 { 00:27:58.065 "trtype": "TCP" 00:27:58.065 } 00:27:58.065 ] 00:27:58.065 } 00:27:58.065 ] 00:27:58.065 }' 00:27:58.065 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:58.065 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:58.065 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:58.065 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:58.065 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2554936 00:28:06.174 Initializing NVMe Controllers 00:28:06.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:06.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:06.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:06.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:06.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:28:06.174 Initialization complete. Launching workers. 00:28:06.174 ======================================================== 00:28:06.174 Latency(us) 00:28:06.174 Device Information : IOPS MiB/s Average min max 00:28:06.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9152.70 35.75 6993.21 2901.90 15011.14 00:28:06.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8920.30 34.84 7175.23 3124.03 11273.63 00:28:06.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8050.20 31.45 7950.31 3346.14 13365.36 00:28:06.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8958.60 34.99 7144.78 3272.90 10939.26 00:28:06.174 ======================================================== 00:28:06.174 Total : 35081.79 137.04 7297.82 2901.90 15011.14 00:28:06.174 00:28:06.174 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:06.174 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:06.174 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:06.174 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:06.174 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:06.174 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:06.174 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:06.174 rmmod nvme_tcp 00:28:06.432 rmmod nvme_fabrics 00:28:06.432 rmmod nvme_keyring 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2554433 ']' 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2554433 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2554433 ']' 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2554433 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2554433 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2554433' 00:28:06.432 killing process with pid 2554433 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2554433 00:28:06.432 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2554433 00:28:07.808 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
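The perf_adq pass that finishes above configures the target over RPC, drives it with spdk_nvme_perf, and checks queue placement mid-run: nvmf_get_stats must report exactly one active IO qpair on each of the four poll groups (the target runs with -m 0xF on cores 0-3 while the initiator uses -c 0xF0, cores 4-7, so the two sides stay off each other's cores). The RPC names and arguments below are taken from the trace; issuing them through scripts/rpc.py against the default /var/tmp/spdk.sock, rather than through the harness's rpc_cmd wrapper, is my assumption:

    # Target-side configuration (values as traced above)
    scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator-side load in the background, then the placement check while it runs
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    sleep 2                                    # let the IO qpairs connect before sampling stats
    scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l   # -> 4
    wait                                       # perf then prints the IOPS/latency table seen above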
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:07.808 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:07.808 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:07.808 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:07.808 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:07.808 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:07.808 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:07.808 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.809 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.809 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.809 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.809 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.707 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.707 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:09.707 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:09.707 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:10.274 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:12.800 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
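Between passes, adq_reload_driver (traced above) bounces the NIC driver, presumably so the next run starts from a fresh queue/channel state: the sch_mqprio qdisc module is loaded, ice is removed and reinserted, and the script sleeps before repeating device discovery so the cvl_* interfaces have time to reappear. The sequence as run:

    modprobe -a sch_mqprio     # qdisc module used for ADQ traffic classes
    rmmod ice                  # unload the E810 driver...
    modprobe ice               # ...and load it again
    sleep 5                    # let the cvl_* netdevs come back before re-discovery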
mellanox=0x15b3 pci net_dev 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.134 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:18.135 18:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:18.135 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:18.135 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:18.135 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:18.135 18:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:18.135 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:18.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:28:18.135 00:28:18.135 --- 10.0.0.2 ping statistics --- 00:28:18.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.135 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:18.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:28:18.135 00:28:18.135 --- 10.0.0.1 ping statistics --- 00:28:18.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.135 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:18.135 net.core.busy_poll = 1 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:18.135 net.core.busy_read = 1 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:18.135 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:18.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:18.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:18.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:18.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:18.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:18.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.135 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2557931 00:28:18.136 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:18.136 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2557931 00:28:18.136 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2557931 ']' 00:28:18.136 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.136 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.136 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.136 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.136 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:18.394 [2024-11-28 18:30:30.261550] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:18.394 [2024-11-28 18:30:30.261697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.394 [2024-11-28 18:30:30.416029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:18.652 [2024-11-28 18:30:30.559942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
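Condensed, the ADQ host-side setup that adq_configure_driver performed above is: enable hardware TC offload on the target-side port, clear the channel-pkt-inspect-optimize private flag, turn on busy polling, create two hardware traffic classes with mqprio, and steer NVMe/TCP traffic for 10.0.0.2:4420 into TC 1 with a hardware-offloaded flower filter. A sketch of the same sequence outside the harness (interface, namespace and all option values are taken from the trace; set_xps_rxqs is the SPDK helper script invoked above):

  NS="ip netns exec cvl_0_0_ns_spdk"; IF=cvl_0_0
  $NS ethtool --offload $IF hw-tc-offload on
  $NS ethtool --set-priv-flags $IF channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes: queues 0-1 serve TC0 (default), queues 2-3 serve TC1 (ADQ)
  $NS tc qdisc add dev $IF root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  $NS tc qdisc add dev $IF ingress
  # hardware-offloaded filter: NVMe/TCP traffic to 10.0.0.2:4420 lands in TC 1
  $NS tc filter add dev $IF protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  $NS ./scripts/perf/nvmf/set_xps_rxqs $IF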
00:28:18.652 [2024-11-28 18:30:30.560025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.652 [2024-11-28 18:30:30.560051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.652 [2024-11-28 18:30:30.560075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.652 [2024-11-28 18:30:30.560102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.652 [2024-11-28 18:30:30.562965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.652 [2024-11-28 18:30:30.563037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:18.652 [2024-11-28 18:30:30.563204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.652 [2024-11-28 18:30:30.563209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.218 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.784 18:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.784 [2024-11-28 18:30:31.621037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.784 Malloc1 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:19.784 [2024-11-28 18:30:31.737053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2558129 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:19.784 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:21.684 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:21.684 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.684 18:30:33 
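On the target side, the RPCs issued above (the target was started with --wait-for-rpc, so socket options can be changed before framework_start_init) amount to: enable placement-id based poll-group assignment and zero-copy send on the posix sock implementation, start the framework, create the TCP transport with a non-default socket priority, and export a 64 MiB malloc bdev through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. Roughly the same sequence with scripts/rpc.py (a sketch; the test drives these through its rpc_cmd wrapper instead):

  ./scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420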
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.685 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.685 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:21.685 "tick_rate": 2700000000, 00:28:21.685 "poll_groups": [ 00:28:21.685 { 00:28:21.685 "name": "nvmf_tgt_poll_group_000", 00:28:21.685 "admin_qpairs": 1, 00:28:21.685 "io_qpairs": 2, 00:28:21.685 "current_admin_qpairs": 1, 00:28:21.685 "current_io_qpairs": 2, 00:28:21.685 "pending_bdev_io": 0, 00:28:21.685 "completed_nvme_io": 18739, 00:28:21.685 "transports": [ 00:28:21.685 { 00:28:21.685 "trtype": "TCP" 00:28:21.685 } 00:28:21.685 ] 00:28:21.685 }, 00:28:21.685 { 00:28:21.685 "name": "nvmf_tgt_poll_group_001", 00:28:21.685 "admin_qpairs": 0, 00:28:21.685 "io_qpairs": 2, 00:28:21.685 "current_admin_qpairs": 0, 00:28:21.685 "current_io_qpairs": 2, 00:28:21.685 "pending_bdev_io": 0, 00:28:21.685 "completed_nvme_io": 19005, 00:28:21.685 "transports": [ 00:28:21.685 { 00:28:21.685 "trtype": "TCP" 00:28:21.685 } 00:28:21.685 ] 00:28:21.685 }, 00:28:21.685 { 00:28:21.685 "name": "nvmf_tgt_poll_group_002", 00:28:21.685 "admin_qpairs": 0, 00:28:21.685 "io_qpairs": 0, 00:28:21.685 "current_admin_qpairs": 0, 00:28:21.685 "current_io_qpairs": 0, 00:28:21.685 "pending_bdev_io": 0, 00:28:21.685 "completed_nvme_io": 0, 00:28:21.685 "transports": [ 00:28:21.685 { 00:28:21.685 "trtype": "TCP" 00:28:21.685 } 00:28:21.685 ] 00:28:21.685 }, 00:28:21.685 { 00:28:21.685 "name": "nvmf_tgt_poll_group_003", 00:28:21.685 "admin_qpairs": 0, 00:28:21.685 "io_qpairs": 0, 00:28:21.685 "current_admin_qpairs": 0, 00:28:21.685 "current_io_qpairs": 0, 00:28:21.685 "pending_bdev_io": 0, 00:28:21.685 "completed_nvme_io": 0, 00:28:21.685 "transports": [ 00:28:21.685 { 00:28:21.685 "trtype": "TCP" 00:28:21.685 } 00:28:21.685 ] 00:28:21.685 } 00:28:21.685 ] 00:28:21.685 }' 00:28:21.685 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:21.685 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:21.943 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:21.943 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:21.943 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2558129 00:28:30.077 Initializing NVMe Controllers 00:28:30.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:30.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:30.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:30.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:30.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:30.077 Initialization complete. Launching workers. 
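The nvmf_get_stats output above is what the test inspects while spdk_nvme_perf (-q 64 -o 4096 -w randread on cores 0xF0) is running: only poll groups 000 and 001 are completing I/O (two qpairs each), while 002 and 003 stay idle, which is the expected effect of the ADQ filter pinning connections to a subset of queues. The pass/fail check reduces to counting the idle groups, roughly:

  # Sketch of the check above: count poll groups with no active I/O qpairs and
  # require at least 2 of the 4 to be idle while the perf load is running.
  idle=$(./scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
      | wc -l)
  if [[ $idle -lt 2 ]]; then
      echo "ADQ steering did not confine I/O to the expected poll groups" >&2
      exit 1
  fi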
00:28:30.077 ======================================================== 00:28:30.077 Latency(us) 00:28:30.077 Device Information : IOPS MiB/s Average min max 00:28:30.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4590.50 17.93 13942.99 2508.58 57392.23 00:28:30.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4253.30 16.61 15064.73 2506.00 57252.97 00:28:30.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5722.80 22.35 11188.26 2191.58 57016.79 00:28:30.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6119.20 23.90 10459.61 1666.33 58204.29 00:28:30.077 ======================================================== 00:28:30.077 Total : 20685.80 80.80 12381.09 1666.33 58204.29 00:28:30.077 00:28:30.077 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:30.077 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.077 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:30.077 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.077 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:30.077 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.077 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.077 rmmod nvme_tcp 00:28:30.077 rmmod nvme_fabrics 00:28:30.077 rmmod nvme_keyring 00:28:30.077 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.077 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2557931 ']' 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2557931 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2557931 ']' 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2557931 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2557931 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2557931' 00:28:30.078 killing process with pid 2557931 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2557931 00:28:30.078 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2557931 00:28:31.454 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:31.454 
18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:31.454 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:31.454 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:31.454 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:31.454 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:31.454 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:31.454 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:31.454 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:31.454 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.454 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.454 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.356 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:33.356 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:33.356 00:28:33.356 real 0m49.498s 00:28:33.356 user 2m52.476s 00:28:33.356 sys 0m10.416s 00:28:33.356 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.356 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.356 ************************************ 00:28:33.356 END TEST nvmf_perf_adq 00:28:33.356 ************************************ 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:33.631 ************************************ 00:28:33.631 START TEST nvmf_shutdown 00:28:33.631 ************************************ 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:33.631 * Looking for test storage... 
00:28:33.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:33.631 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:33.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.632 --rc genhtml_branch_coverage=1 00:28:33.632 --rc genhtml_function_coverage=1 00:28:33.632 --rc genhtml_legend=1 00:28:33.632 --rc geninfo_all_blocks=1 00:28:33.632 --rc geninfo_unexecuted_blocks=1 00:28:33.632 00:28:33.632 ' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:33.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.632 --rc genhtml_branch_coverage=1 00:28:33.632 --rc genhtml_function_coverage=1 00:28:33.632 --rc genhtml_legend=1 00:28:33.632 --rc geninfo_all_blocks=1 00:28:33.632 --rc geninfo_unexecuted_blocks=1 00:28:33.632 00:28:33.632 ' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:33.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.632 --rc genhtml_branch_coverage=1 00:28:33.632 --rc genhtml_function_coverage=1 00:28:33.632 --rc genhtml_legend=1 00:28:33.632 --rc geninfo_all_blocks=1 00:28:33.632 --rc geninfo_unexecuted_blocks=1 00:28:33.632 00:28:33.632 ' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:33.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.632 --rc genhtml_branch_coverage=1 00:28:33.632 --rc genhtml_function_coverage=1 00:28:33.632 --rc genhtml_legend=1 00:28:33.632 --rc geninfo_all_blocks=1 00:28:33.632 --rc geninfo_unexecuted_blocks=1 00:28:33.632 00:28:33.632 ' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
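The scripts/common.sh trace above is shutdown.sh deciding which lcov option names to use: cmp_versions splits the two version strings on '.', '-' and ':' and compares them field by field, so lt 1.15 2 is true and the older lcov_branch_coverage/lcov_function_coverage spellings go into LCOV_OPTS. A stripped-down sketch of that comparison (numeric fields only):

  version_lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i x y
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          x=${v1[i]:-0} y=${v2[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1   # equal, so not less-than
  }
  version_lt 1.15 2 && echo "lcov older than 2.0: keep --rc lcov_branch_coverage=1"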
00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:33.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:33.632 18:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:33.632 ************************************ 00:28:33.632 START TEST nvmf_shutdown_tc1 00:28:33.632 ************************************ 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.632 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:36.162 18:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:36.162 18:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:36.162 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:36.162 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:36.162 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:36.162 18:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:36.162 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:36.162 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:36.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:36.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:28:36.163 00:28:36.163 --- 10.0.0.2 ping statistics --- 00:28:36.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.163 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:36.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:36.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:28:36.163 00:28:36.163 --- 10.0.0.1 ping statistics --- 00:28:36.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:36.163 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2561431 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2561431 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2561431 ']' 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
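Read back out of the harness functions, the nvmf_tcp_init sequence traced above boils down to the following standalone steps. Interface names, addresses, the listener port, and the nvmf_tgt arguments are taken directly from the log; treat this as a sketch of what the trace does, not the harness code itself:

# move one e810 port into its own namespace and address both sides
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# prove both directions work, then start the target inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

Keeping the target's port inside a namespace forces initiator-to-target traffic across the physical cvl_0_1/cvl_0_0 back-to-back link rather than the host's local loopback path, which is the point of the phy variant of this test.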
00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.163 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.163 [2024-11-28 18:30:48.030885] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:36.163 [2024-11-28 18:30:48.031034] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.163 [2024-11-28 18:30:48.179872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:36.420 [2024-11-28 18:30:48.323982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:36.420 [2024-11-28 18:30:48.324049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:36.420 [2024-11-28 18:30:48.324075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:36.421 [2024-11-28 18:30:48.324109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:36.421 [2024-11-28 18:30:48.324132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:36.421 [2024-11-28 18:30:48.327108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:36.421 [2024-11-28 18:30:48.327163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:36.421 [2024-11-28 18:30:48.327193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:36.421 [2024-11-28 18:30:48.327188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.984 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.984 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:36.984 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:36.984 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:36.984 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.984 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.984 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:36.984 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.985 [2024-11-28 18:30:49.017808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:36.985 18:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.985 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.243 Malloc1 
00:28:37.243 [2024-11-28 18:30:49.157765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.243 Malloc2 00:28:37.243 Malloc3 00:28:37.501 Malloc4 00:28:37.501 Malloc5 00:28:37.758 Malloc6 00:28:37.758 Malloc7 00:28:37.758 Malloc8 00:28:38.016 Malloc9 00:28:38.016 Malloc10 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2561753 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2561753 /var/tmp/bdevperf.sock 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2561753 ']' 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:38.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
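The create_subsystems block above never prints the snippet it builds: for each of the ten subsystems it only cat's a templated fragment into rpcs.txt and then replays the whole file through a single rpc_cmd call. Judging by the Malloc1 through Malloc10 bdevs and the single 10.0.0.2:4420 listener that do show up in the output, each fragment plausibly amounts to a batch like the one below. The RPC names are standard SPDK rpc.py methods, but the sizes, serial number, and flags here are assumptions, not values read from the trace:

# hypothetical per-subsystem batch for i=1 (repeated for 2..10); not shown verbatim in the log
bdev_malloc_create -b Malloc1 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420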
00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.016 { 00:28:38.016 "params": { 00:28:38.016 "name": "Nvme$subsystem", 00:28:38.016 "trtype": "$TEST_TRANSPORT", 00:28:38.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.016 "adrfam": "ipv4", 00:28:38.016 "trsvcid": "$NVMF_PORT", 00:28:38.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.016 "hdgst": ${hdgst:-false}, 00:28:38.016 "ddgst": ${ddgst:-false} 00:28:38.016 }, 00:28:38.016 "method": "bdev_nvme_attach_controller" 00:28:38.016 } 00:28:38.016 EOF 00:28:38.016 )") 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.016 { 00:28:38.016 "params": { 00:28:38.016 "name": "Nvme$subsystem", 00:28:38.016 "trtype": "$TEST_TRANSPORT", 00:28:38.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.016 "adrfam": "ipv4", 00:28:38.016 "trsvcid": "$NVMF_PORT", 00:28:38.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.016 "hdgst": ${hdgst:-false}, 00:28:38.016 "ddgst": ${ddgst:-false} 00:28:38.016 }, 00:28:38.016 "method": "bdev_nvme_attach_controller" 00:28:38.016 } 00:28:38.016 EOF 00:28:38.016 )") 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.016 { 00:28:38.016 "params": { 00:28:38.016 "name": "Nvme$subsystem", 00:28:38.016 "trtype": "$TEST_TRANSPORT", 00:28:38.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.016 "adrfam": "ipv4", 00:28:38.016 "trsvcid": "$NVMF_PORT", 00:28:38.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.016 "hdgst": ${hdgst:-false}, 00:28:38.016 "ddgst": ${ddgst:-false} 00:28:38.016 }, 00:28:38.016 "method": "bdev_nvme_attach_controller" 00:28:38.016 } 00:28:38.016 EOF 00:28:38.016 )") 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.016 { 00:28:38.016 "params": { 00:28:38.016 "name": "Nvme$subsystem", 00:28:38.016 
"trtype": "$TEST_TRANSPORT", 00:28:38.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.016 "adrfam": "ipv4", 00:28:38.016 "trsvcid": "$NVMF_PORT", 00:28:38.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.016 "hdgst": ${hdgst:-false}, 00:28:38.016 "ddgst": ${ddgst:-false} 00:28:38.016 }, 00:28:38.016 "method": "bdev_nvme_attach_controller" 00:28:38.016 } 00:28:38.016 EOF 00:28:38.016 )") 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.016 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.016 { 00:28:38.016 "params": { 00:28:38.016 "name": "Nvme$subsystem", 00:28:38.016 "trtype": "$TEST_TRANSPORT", 00:28:38.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.016 "adrfam": "ipv4", 00:28:38.016 "trsvcid": "$NVMF_PORT", 00:28:38.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.016 "hdgst": ${hdgst:-false}, 00:28:38.016 "ddgst": ${ddgst:-false} 00:28:38.016 }, 00:28:38.016 "method": "bdev_nvme_attach_controller" 00:28:38.016 } 00:28:38.016 EOF 00:28:38.016 )") 00:28:38.274 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.274 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.274 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.274 { 00:28:38.274 "params": { 00:28:38.274 "name": "Nvme$subsystem", 00:28:38.274 "trtype": "$TEST_TRANSPORT", 00:28:38.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.274 "adrfam": "ipv4", 00:28:38.274 "trsvcid": "$NVMF_PORT", 00:28:38.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.274 "hdgst": ${hdgst:-false}, 00:28:38.274 "ddgst": ${ddgst:-false} 00:28:38.274 }, 00:28:38.274 "method": "bdev_nvme_attach_controller" 00:28:38.274 } 00:28:38.274 EOF 00:28:38.274 )") 00:28:38.274 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.275 { 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme$subsystem", 00:28:38.275 "trtype": "$TEST_TRANSPORT", 00:28:38.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "$NVMF_PORT", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.275 "hdgst": ${hdgst:-false}, 00:28:38.275 "ddgst": ${ddgst:-false} 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 } 00:28:38.275 EOF 00:28:38.275 )") 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.275 18:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.275 { 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme$subsystem", 00:28:38.275 "trtype": "$TEST_TRANSPORT", 00:28:38.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "$NVMF_PORT", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.275 "hdgst": ${hdgst:-false}, 00:28:38.275 "ddgst": ${ddgst:-false} 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 } 00:28:38.275 EOF 00:28:38.275 )") 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.275 { 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme$subsystem", 00:28:38.275 "trtype": "$TEST_TRANSPORT", 00:28:38.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "$NVMF_PORT", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.275 "hdgst": ${hdgst:-false}, 00:28:38.275 "ddgst": ${ddgst:-false} 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 } 00:28:38.275 EOF 00:28:38.275 )") 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:38.275 { 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme$subsystem", 00:28:38.275 "trtype": "$TEST_TRANSPORT", 00:28:38.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "$NVMF_PORT", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.275 "hdgst": ${hdgst:-false}, 00:28:38.275 "ddgst": ${ddgst:-false} 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 } 00:28:38.275 EOF 00:28:38.275 )") 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
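Each pass through the loop above appends one heredoc fragment per requested subsystem to the config array; jq then assembles them, and the printf output that follows in the trace is that JSON with $subsystem, $TEST_TRANSPORT, and $NVMF_FIRST_TARGET_IP already expanded to cnode1..cnode10, tcp, and 10.0.0.2. The generator is only ever consumed through process substitution, so neither helper sees a file on disk. Reconstructed from the kill message and the bdevperf command line later in the trace, the calling pattern is roughly (paths abbreviated):

# first pass: bdev_svc loads the generated JSON over /var/tmp/bdevperf.sock (killed again before the perf run)
test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}")
# the perf run itself: queue depth 64, 65536-byte verify I/O for 1 second against the same JSON
build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 1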
00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:38.275 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme1", 00:28:38.275 "trtype": "tcp", 00:28:38.275 "traddr": "10.0.0.2", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "4420", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:38.275 "hdgst": false, 00:28:38.275 "ddgst": false 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 },{ 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme2", 00:28:38.275 "trtype": "tcp", 00:28:38.275 "traddr": "10.0.0.2", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "4420", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:38.275 "hdgst": false, 00:28:38.275 "ddgst": false 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 },{ 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme3", 00:28:38.275 "trtype": "tcp", 00:28:38.275 "traddr": "10.0.0.2", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "4420", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:38.275 "hdgst": false, 00:28:38.275 "ddgst": false 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 },{ 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme4", 00:28:38.275 "trtype": "tcp", 00:28:38.275 "traddr": "10.0.0.2", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "4420", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:38.275 "hdgst": false, 00:28:38.275 "ddgst": false 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 },{ 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme5", 00:28:38.275 "trtype": "tcp", 00:28:38.275 "traddr": "10.0.0.2", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "4420", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:38.275 "hdgst": false, 00:28:38.275 "ddgst": false 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 },{ 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme6", 00:28:38.275 "trtype": "tcp", 00:28:38.275 "traddr": "10.0.0.2", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "4420", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:38.275 "hdgst": false, 00:28:38.275 "ddgst": false 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 },{ 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme7", 00:28:38.275 "trtype": "tcp", 00:28:38.275 "traddr": "10.0.0.2", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "4420", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:38.275 "hdgst": false, 00:28:38.275 "ddgst": false 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 },{ 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme8", 00:28:38.275 "trtype": "tcp", 00:28:38.275 "traddr": "10.0.0.2", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "4420", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:38.275 "hdgst": false, 00:28:38.275 "ddgst": false 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 },{ 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme9", 00:28:38.275 "trtype": "tcp", 00:28:38.275 "traddr": "10.0.0.2", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "4420", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:38.275 "hdgst": false, 00:28:38.275 "ddgst": false 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 },{ 00:28:38.275 "params": { 00:28:38.275 "name": "Nvme10", 00:28:38.275 "trtype": "tcp", 00:28:38.275 "traddr": "10.0.0.2", 00:28:38.275 "adrfam": "ipv4", 00:28:38.275 "trsvcid": "4420", 00:28:38.275 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:38.275 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:38.275 "hdgst": false, 00:28:38.275 "ddgst": false 00:28:38.275 }, 00:28:38.275 "method": "bdev_nvme_attach_controller" 00:28:38.275 }' 00:28:38.275 [2024-11-28 18:30:50.186204] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:38.275 [2024-11-28 18:30:50.186353] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:38.275 [2024-11-28 18:30:50.330598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.533 [2024-11-28 18:30:50.462811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.059 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.059 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:41.059 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:41.059 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.059 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:41.059 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.059 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2561753 00:28:41.059 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:41.059 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:41.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2561753 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:41.993 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2561431 00:28:41.993 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:41.993 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:28:41.993 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:41.993 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:41.993 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.994 { 00:28:41.994 "params": { 00:28:41.994 "name": "Nvme$subsystem", 00:28:41.994 "trtype": "$TEST_TRANSPORT", 00:28:41.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.994 "adrfam": "ipv4", 00:28:41.994 "trsvcid": "$NVMF_PORT", 00:28:41.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.994 "hdgst": ${hdgst:-false}, 00:28:41.994 "ddgst": ${ddgst:-false} 00:28:41.994 }, 00:28:41.994 "method": "bdev_nvme_attach_controller" 00:28:41.994 } 00:28:41.994 EOF 00:28:41.994 )") 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.994 { 00:28:41.994 "params": { 00:28:41.994 "name": "Nvme$subsystem", 00:28:41.994 "trtype": "$TEST_TRANSPORT", 00:28:41.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.994 "adrfam": "ipv4", 00:28:41.994 "trsvcid": "$NVMF_PORT", 00:28:41.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.994 "hdgst": ${hdgst:-false}, 00:28:41.994 "ddgst": ${ddgst:-false} 00:28:41.994 }, 00:28:41.994 "method": "bdev_nvme_attach_controller" 00:28:41.994 } 00:28:41.994 EOF 00:28:41.994 )") 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.994 { 00:28:41.994 "params": { 00:28:41.994 "name": "Nvme$subsystem", 00:28:41.994 "trtype": "$TEST_TRANSPORT", 00:28:41.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.994 "adrfam": "ipv4", 00:28:41.994 "trsvcid": "$NVMF_PORT", 00:28:41.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.994 "hdgst": ${hdgst:-false}, 00:28:41.994 "ddgst": ${ddgst:-false} 00:28:41.994 }, 00:28:41.994 "method": "bdev_nvme_attach_controller" 00:28:41.994 } 00:28:41.994 EOF 00:28:41.994 )") 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.994 { 00:28:41.994 "params": { 00:28:41.994 "name": "Nvme$subsystem", 00:28:41.994 "trtype": "$TEST_TRANSPORT", 00:28:41.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.994 "adrfam": "ipv4", 00:28:41.994 
"trsvcid": "$NVMF_PORT", 00:28:41.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.994 "hdgst": ${hdgst:-false}, 00:28:41.994 "ddgst": ${ddgst:-false} 00:28:41.994 }, 00:28:41.994 "method": "bdev_nvme_attach_controller" 00:28:41.994 } 00:28:41.994 EOF 00:28:41.994 )") 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.994 { 00:28:41.994 "params": { 00:28:41.994 "name": "Nvme$subsystem", 00:28:41.994 "trtype": "$TEST_TRANSPORT", 00:28:41.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.994 "adrfam": "ipv4", 00:28:41.994 "trsvcid": "$NVMF_PORT", 00:28:41.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.994 "hdgst": ${hdgst:-false}, 00:28:41.994 "ddgst": ${ddgst:-false} 00:28:41.994 }, 00:28:41.994 "method": "bdev_nvme_attach_controller" 00:28:41.994 } 00:28:41.994 EOF 00:28:41.994 )") 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.994 { 00:28:41.994 "params": { 00:28:41.994 "name": "Nvme$subsystem", 00:28:41.994 "trtype": "$TEST_TRANSPORT", 00:28:41.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.994 "adrfam": "ipv4", 00:28:41.994 "trsvcid": "$NVMF_PORT", 00:28:41.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.994 "hdgst": ${hdgst:-false}, 00:28:41.994 "ddgst": ${ddgst:-false} 00:28:41.994 }, 00:28:41.994 "method": "bdev_nvme_attach_controller" 00:28:41.994 } 00:28:41.994 EOF 00:28:41.994 )") 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.994 { 00:28:41.994 "params": { 00:28:41.994 "name": "Nvme$subsystem", 00:28:41.994 "trtype": "$TEST_TRANSPORT", 00:28:41.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.994 "adrfam": "ipv4", 00:28:41.994 "trsvcid": "$NVMF_PORT", 00:28:41.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.994 "hdgst": ${hdgst:-false}, 00:28:41.994 "ddgst": ${ddgst:-false} 00:28:41.994 }, 00:28:41.994 "method": "bdev_nvme_attach_controller" 00:28:41.994 } 00:28:41.994 EOF 00:28:41.994 )") 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.994 { 00:28:41.994 
"params": { 00:28:41.994 "name": "Nvme$subsystem", 00:28:41.994 "trtype": "$TEST_TRANSPORT", 00:28:41.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.994 "adrfam": "ipv4", 00:28:41.994 "trsvcid": "$NVMF_PORT", 00:28:41.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.994 "hdgst": ${hdgst:-false}, 00:28:41.994 "ddgst": ${ddgst:-false} 00:28:41.994 }, 00:28:41.994 "method": "bdev_nvme_attach_controller" 00:28:41.994 } 00:28:41.994 EOF 00:28:41.994 )") 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.994 { 00:28:41.994 "params": { 00:28:41.994 "name": "Nvme$subsystem", 00:28:41.994 "trtype": "$TEST_TRANSPORT", 00:28:41.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.994 "adrfam": "ipv4", 00:28:41.994 "trsvcid": "$NVMF_PORT", 00:28:41.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.994 "hdgst": ${hdgst:-false}, 00:28:41.994 "ddgst": ${ddgst:-false} 00:28:41.994 }, 00:28:41.994 "method": "bdev_nvme_attach_controller" 00:28:41.994 } 00:28:41.994 EOF 00:28:41.994 )") 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.994 { 00:28:41.994 "params": { 00:28:41.994 "name": "Nvme$subsystem", 00:28:41.994 "trtype": "$TEST_TRANSPORT", 00:28:41.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.994 "adrfam": "ipv4", 00:28:41.994 "trsvcid": "$NVMF_PORT", 00:28:41.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.994 "hdgst": ${hdgst:-false}, 00:28:41.994 "ddgst": ${ddgst:-false} 00:28:41.994 }, 00:28:41.994 "method": "bdev_nvme_attach_controller" 00:28:41.994 } 00:28:41.994 EOF 00:28:41.994 )") 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:41.994 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:41.994 "params": { 00:28:41.994 "name": "Nvme1", 00:28:41.994 "trtype": "tcp", 00:28:41.994 "traddr": "10.0.0.2", 00:28:41.994 "adrfam": "ipv4", 00:28:41.994 "trsvcid": "4420", 00:28:41.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:41.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:41.994 "hdgst": false, 00:28:41.994 "ddgst": false 00:28:41.994 }, 00:28:41.994 "method": "bdev_nvme_attach_controller" 00:28:41.994 },{ 00:28:41.995 "params": { 00:28:41.995 "name": "Nvme2", 00:28:41.995 "trtype": "tcp", 00:28:41.995 "traddr": "10.0.0.2", 00:28:41.995 "adrfam": "ipv4", 00:28:41.995 "trsvcid": "4420", 00:28:41.995 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:41.995 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:41.995 "hdgst": false, 00:28:41.995 "ddgst": false 00:28:41.995 }, 00:28:41.995 "method": "bdev_nvme_attach_controller" 00:28:41.995 },{ 00:28:41.995 "params": { 00:28:41.995 "name": "Nvme3", 00:28:41.995 "trtype": "tcp", 00:28:41.995 "traddr": "10.0.0.2", 00:28:41.995 "adrfam": "ipv4", 00:28:41.995 "trsvcid": "4420", 00:28:41.995 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:41.995 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:41.995 "hdgst": false, 00:28:41.995 "ddgst": false 00:28:41.995 }, 00:28:41.995 "method": "bdev_nvme_attach_controller" 00:28:41.995 },{ 00:28:41.995 "params": { 00:28:41.995 "name": "Nvme4", 00:28:41.995 "trtype": "tcp", 00:28:41.995 "traddr": "10.0.0.2", 00:28:41.995 "adrfam": "ipv4", 00:28:41.995 "trsvcid": "4420", 00:28:41.995 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:41.995 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:41.995 "hdgst": false, 00:28:41.995 "ddgst": false 00:28:41.995 }, 00:28:41.995 "method": "bdev_nvme_attach_controller" 00:28:41.995 },{ 00:28:41.995 "params": { 00:28:41.995 "name": "Nvme5", 00:28:41.995 "trtype": "tcp", 00:28:41.995 "traddr": "10.0.0.2", 00:28:41.995 "adrfam": "ipv4", 00:28:41.995 "trsvcid": "4420", 00:28:41.995 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:41.995 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:41.995 "hdgst": false, 00:28:41.995 "ddgst": false 00:28:41.995 }, 00:28:41.995 "method": "bdev_nvme_attach_controller" 00:28:41.995 },{ 00:28:41.995 "params": { 00:28:41.995 "name": "Nvme6", 00:28:41.995 "trtype": "tcp", 00:28:41.995 "traddr": "10.0.0.2", 00:28:41.995 "adrfam": "ipv4", 00:28:41.995 "trsvcid": "4420", 00:28:41.995 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:41.995 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:41.995 "hdgst": false, 00:28:41.995 "ddgst": false 00:28:41.995 }, 00:28:41.995 "method": "bdev_nvme_attach_controller" 00:28:41.995 },{ 00:28:41.995 "params": { 00:28:41.995 "name": "Nvme7", 00:28:41.995 "trtype": "tcp", 00:28:41.995 "traddr": "10.0.0.2", 00:28:41.995 "adrfam": "ipv4", 00:28:41.995 "trsvcid": "4420", 00:28:41.995 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:41.995 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:41.995 "hdgst": false, 00:28:41.995 "ddgst": false 00:28:41.995 }, 00:28:41.995 "method": "bdev_nvme_attach_controller" 00:28:41.995 },{ 00:28:41.995 "params": { 00:28:41.995 "name": "Nvme8", 00:28:41.995 "trtype": "tcp", 00:28:41.995 "traddr": "10.0.0.2", 00:28:41.995 "adrfam": "ipv4", 00:28:41.995 "trsvcid": "4420", 00:28:41.995 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:41.995 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:41.995 "hdgst": false, 00:28:41.995 "ddgst": false 00:28:41.995 }, 00:28:41.995 "method": "bdev_nvme_attach_controller" 00:28:41.995 },{ 00:28:41.995 "params": { 00:28:41.995 "name": "Nvme9", 00:28:41.995 "trtype": "tcp", 00:28:41.995 "traddr": "10.0.0.2", 00:28:41.995 "adrfam": "ipv4", 00:28:41.995 "trsvcid": "4420", 00:28:41.995 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:41.995 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:41.995 "hdgst": false, 00:28:41.995 "ddgst": false 00:28:41.995 }, 00:28:41.995 "method": "bdev_nvme_attach_controller" 00:28:41.995 },{ 00:28:41.995 "params": { 00:28:41.995 "name": "Nvme10", 00:28:41.995 "trtype": "tcp", 00:28:41.995 "traddr": "10.0.0.2", 00:28:41.995 "adrfam": "ipv4", 00:28:41.995 "trsvcid": "4420", 00:28:41.995 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:41.995 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:41.995 "hdgst": false, 00:28:41.995 "ddgst": false 00:28:41.995 }, 00:28:41.995 "method": "bdev_nvme_attach_controller" 00:28:41.995 }' 00:28:41.995 [2024-11-28 18:30:54.026010] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:41.995 [2024-11-28 18:30:54.026174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562188 ] 00:28:42.253 [2024-11-28 18:30:54.175355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.253 [2024-11-28 18:30:54.306219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.151 Running I/O for 1 seconds... 00:28:45.527 1476.00 IOPS, 92.25 MiB/s 00:28:45.527 Latency(us) 00:28:45.527 [2024-11-28T17:30:57.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.527 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.527 Verification LBA range: start 0x0 length 0x400 00:28:45.527 Nvme1n1 : 1.10 174.96 10.93 0.00 0.00 360524.86 26602.76 337097.77 00:28:45.527 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.527 Verification LBA range: start 0x0 length 0x400 00:28:45.527 Nvme2n1 : 1.14 168.13 10.51 0.00 0.00 370095.85 25437.68 307582.29 00:28:45.527 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.527 Verification LBA range: start 0x0 length 0x400 00:28:45.527 Nvme3n1 : 1.21 212.05 13.25 0.00 0.00 288815.41 44273.21 312242.63 00:28:45.527 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.527 Verification LBA range: start 0x0 length 0x400 00:28:45.527 Nvme4n1 : 1.09 180.22 11.26 0.00 0.00 329452.64 13204.29 316902.97 00:28:45.527 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.527 Verification LBA range: start 0x0 length 0x400 00:28:45.527 Nvme5n1 : 1.21 211.24 13.20 0.00 0.00 279963.31 23787.14 302921.96 00:28:45.527 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.527 Verification LBA range: start 0x0 length 0x400 00:28:45.527 Nvme6n1 : 1.20 214.18 13.39 0.00 0.00 270465.33 21845.33 287387.50 00:28:45.527 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.527 Verification LBA range: start 0x0 length 0x400 00:28:45.527 Nvme7n1 : 1.22 209.67 13.10 0.00 0.00 272274.20 22233.69 310689.19 00:28:45.527 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.527 
Verification LBA range: start 0x0 length 0x400 00:28:45.527 Nvme8n1 : 1.23 208.75 13.05 0.00 0.00 268788.62 17185.00 315349.52 00:28:45.527 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.527 Verification LBA range: start 0x0 length 0x400 00:28:45.527 Nvme9n1 : 1.18 162.19 10.14 0.00 0.00 337676.01 23107.51 335544.32 00:28:45.527 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:45.527 Verification LBA range: start 0x0 length 0x400 00:28:45.527 Nvme10n1 : 1.20 160.07 10.00 0.00 0.00 336293.74 24855.13 352632.23 00:28:45.527 [2024-11-28T17:30:57.627Z] =================================================================================================================== 00:28:45.527 [2024-11-28T17:30:57.627Z] Total : 1901.47 118.84 0.00 0.00 306422.75 13204.29 352632.23 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.462 rmmod nvme_tcp 00:28:46.462 rmmod nvme_fabrics 00:28:46.462 rmmod nvme_keyring 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2561431 ']' 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2561431 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2561431 ']' 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2561431 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2561431 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2561431' 00:28:46.462 killing process with pid 2561431 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2561431 00:28:46.462 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2561431 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.749 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.655 00:28:51.655 real 0m17.613s 00:28:51.655 user 0m57.369s 00:28:51.655 sys 0m4.051s 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:51.655 ************************************ 00:28:51.655 END TEST nvmf_shutdown_tc1 00:28:51.655 ************************************ 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
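The teardown traced here mirrors the setup: unload the kernel initiator modules, drop the SPDK_NVMF-tagged rule by filtering it out of an iptables save/restore round trip, remove the namespace, and flush the leftover address. Condensed into standalone commands; the body of _remove_spdk_ns is not shown in the trace, so the ip netns delete line is an assumption:

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 2561431                                    # the nvmf_tgt started for this test case
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk                 # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1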
00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:51.655 ************************************ 00:28:51.655 START TEST nvmf_shutdown_tc2 00:28:51.655 ************************************ 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:51.655 18:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.655 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:51.656 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:51.656 18:31:03 
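[editor's note] The device matching above keys on Intel E810 PCI device IDs (0x1592 and 0x159b), both of which the log shows bound to the ice driver. A minimal stand-alone check outside the harness could look like the following; the lspci invocation is an illustration, not part of the test scripts:

# List Intel E810 functions by the same vendor/device IDs the harness matches.
# 0x1592 = E810-C (QSFP), 0x159b = E810-XXV (SFP); both are claimed by the ice driver.
for dev_id in 1592 159b; do
    lspci -Dnn -d "8086:${dev_id}"
done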
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:51.656 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:51.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:51.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.656 18:31:03 
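[editor's note] Both E810 functions resolve to kernel interfaces (cvl_0_0 and cvl_0_1) through the sysfs glob seen above. The same lookup can be done by hand; the PCI address below is taken from the log, and the link-state check mirrors the "[[ up == up ]]" test in the trace:

# Map a PCI function to its net device name, as the pci_net_devs glob does above.
pci=0000:0a:00.0
ls "/sys/bus/pci/devices/${pci}/net/"      # prints cvl_0_0 per this log
cat /sys/class/net/cvl_0_0/operstate       # the harness only accepts ports that are up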
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:28:51.656 00:28:51.656 --- 10.0.0.2 ping statistics --- 00:28:51.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.656 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:28:51.656 00:28:51.656 --- 10.0.0.1 ping statistics --- 00:28:51.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.656 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2563463 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2563463 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2563463 ']' 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
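[editor's note] Condensed, the nvmf_tcp_init sequence above moves the target port into a private network namespace, leaves the initiator port on the host, and verifies reachability in both directions before nvmf_tgt is launched inside the namespace. A stripped-down equivalent, with interface names and addresses copied from the log:

# Target side lives in the cvl_0_0_ns_spdk namespace; initiator side stays on the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                   # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host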
00:28:51.656 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.657 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.657 [2024-11-28 18:31:03.583531] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:51.657 [2024-11-28 18:31:03.583681] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.657 [2024-11-28 18:31:03.740599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:51.915 [2024-11-28 18:31:03.885224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.915 [2024-11-28 18:31:03.885308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.915 [2024-11-28 18:31:03.885334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.915 [2024-11-28 18:31:03.885358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.915 [2024-11-28 18:31:03.885377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:51.915 [2024-11-28 18:31:03.888161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.915 [2024-11-28 18:31:03.888264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:51.915 [2024-11-28 18:31:03.888314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.915 [2024-11-28 18:31:03.888320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.849 [2024-11-28 18:31:04.670415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:52.849 18:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.849 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.849 Malloc1 
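[editor's note] The rpcs.txt batch assembled by the loop above is never echoed into the log; only the resulting Malloc bdevs and the 10.0.0.2:4420 listener are visible. A plausible reconstruction of one per-subsystem block (i=1), using standard SPDK rpc.py method names, might look roughly like this; it is a hypothetical sketch, not the literal file contents:

# Hypothetical rpcs.txt entry for subsystem 1 (sizes and serial are illustrative).
bdev_malloc_create -b Malloc1 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420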
00:28:52.849 [2024-11-28 18:31:04.830313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.849 Malloc2 00:28:53.109 Malloc3 00:28:53.109 Malloc4 00:28:53.109 Malloc5 00:28:53.368 Malloc6 00:28:53.368 Malloc7 00:28:53.625 Malloc8 00:28:53.625 Malloc9 00:28:53.625 Malloc10 00:28:53.883 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.883 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2563782 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2563782 /var/tmp/bdevperf.sock 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2563782 ']' 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:53.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
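[editor's note] The "--json /dev/fd/63" argument above is bash process substitution at work: the controller list produced by gen_nvmf_target_json is handed to bdevperf as an anonymous file descriptor rather than a file on disk. The shape of the call, with every flag taken from the log:

# -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: read-back verification,
# -t 10: run for 10 seconds, -r: RPC socket later queried by bdev_get_iostat.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10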
00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.884 { 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme$subsystem", 00:28:53.884 "trtype": "$TEST_TRANSPORT", 00:28:53.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "$NVMF_PORT", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.884 "hdgst": ${hdgst:-false}, 00:28:53.884 "ddgst": ${ddgst:-false} 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 } 00:28:53.884 EOF 00:28:53.884 )") 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.884 { 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme$subsystem", 00:28:53.884 "trtype": "$TEST_TRANSPORT", 00:28:53.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "$NVMF_PORT", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.884 "hdgst": ${hdgst:-false}, 00:28:53.884 "ddgst": ${ddgst:-false} 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 } 00:28:53.884 EOF 00:28:53.884 )") 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.884 { 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme$subsystem", 00:28:53.884 "trtype": "$TEST_TRANSPORT", 00:28:53.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "$NVMF_PORT", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.884 "hdgst": ${hdgst:-false}, 00:28:53.884 "ddgst": ${ddgst:-false} 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 } 00:28:53.884 EOF 00:28:53.884 )") 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.884 { 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme$subsystem", 00:28:53.884 
"trtype": "$TEST_TRANSPORT", 00:28:53.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "$NVMF_PORT", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.884 "hdgst": ${hdgst:-false}, 00:28:53.884 "ddgst": ${ddgst:-false} 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 } 00:28:53.884 EOF 00:28:53.884 )") 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.884 { 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme$subsystem", 00:28:53.884 "trtype": "$TEST_TRANSPORT", 00:28:53.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "$NVMF_PORT", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.884 "hdgst": ${hdgst:-false}, 00:28:53.884 "ddgst": ${ddgst:-false} 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 } 00:28:53.884 EOF 00:28:53.884 )") 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.884 { 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme$subsystem", 00:28:53.884 "trtype": "$TEST_TRANSPORT", 00:28:53.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "$NVMF_PORT", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.884 "hdgst": ${hdgst:-false}, 00:28:53.884 "ddgst": ${ddgst:-false} 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 } 00:28:53.884 EOF 00:28:53.884 )") 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.884 { 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme$subsystem", 00:28:53.884 "trtype": "$TEST_TRANSPORT", 00:28:53.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "$NVMF_PORT", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.884 "hdgst": ${hdgst:-false}, 00:28:53.884 "ddgst": ${ddgst:-false} 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 } 00:28:53.884 EOF 00:28:53.884 )") 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.884 18:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.884 { 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme$subsystem", 00:28:53.884 "trtype": "$TEST_TRANSPORT", 00:28:53.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "$NVMF_PORT", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.884 "hdgst": ${hdgst:-false}, 00:28:53.884 "ddgst": ${ddgst:-false} 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 } 00:28:53.884 EOF 00:28:53.884 )") 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.884 { 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme$subsystem", 00:28:53.884 "trtype": "$TEST_TRANSPORT", 00:28:53.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "$NVMF_PORT", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.884 "hdgst": ${hdgst:-false}, 00:28:53.884 "ddgst": ${ddgst:-false} 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 } 00:28:53.884 EOF 00:28:53.884 )") 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.884 { 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme$subsystem", 00:28:53.884 "trtype": "$TEST_TRANSPORT", 00:28:53.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "$NVMF_PORT", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.884 "hdgst": ${hdgst:-false}, 00:28:53.884 "ddgst": ${ddgst:-false} 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 } 00:28:53.884 EOF 00:28:53.884 )") 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
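[editor's note] Each controller entry above is produced by appending a here-doc fragment to a bash array; once all ten are collected they are joined with commas (IFS=,) and pretty-printed through jq, giving the merged configuration dumped just below. The pattern in miniature, simplified and not the actual helper:

# Simplified config-assembly pattern: build JSON fragments in a bash array,
# join them with commas, and validate/pretty-print the result with jq.
config=()
for i in 1 2; do
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$i", "traddr": "10.0.0.2", "trsvcid": "4420" },
  "method": "bdev_nvme_attach_controller" }
EOF
    )")
done
IFS=,
printf '[ %s ]\n' "${config[*]}" | jq .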
00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:53.884 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme1", 00:28:53.884 "trtype": "tcp", 00:28:53.884 "traddr": "10.0.0.2", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "4420", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:53.884 "hdgst": false, 00:28:53.884 "ddgst": false 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 },{ 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme2", 00:28:53.884 "trtype": "tcp", 00:28:53.884 "traddr": "10.0.0.2", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "4420", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:53.884 "hdgst": false, 00:28:53.884 "ddgst": false 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 },{ 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme3", 00:28:53.884 "trtype": "tcp", 00:28:53.884 "traddr": "10.0.0.2", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "4420", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:53.884 "hdgst": false, 00:28:53.884 "ddgst": false 00:28:53.884 }, 00:28:53.884 "method": "bdev_nvme_attach_controller" 00:28:53.884 },{ 00:28:53.884 "params": { 00:28:53.884 "name": "Nvme4", 00:28:53.884 "trtype": "tcp", 00:28:53.884 "traddr": "10.0.0.2", 00:28:53.884 "adrfam": "ipv4", 00:28:53.884 "trsvcid": "4420", 00:28:53.884 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:53.884 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:53.884 "hdgst": false, 00:28:53.884 "ddgst": false 00:28:53.885 }, 00:28:53.885 "method": "bdev_nvme_attach_controller" 00:28:53.885 },{ 00:28:53.885 "params": { 00:28:53.885 "name": "Nvme5", 00:28:53.885 "trtype": "tcp", 00:28:53.885 "traddr": "10.0.0.2", 00:28:53.885 "adrfam": "ipv4", 00:28:53.885 "trsvcid": "4420", 00:28:53.885 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:53.885 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:53.885 "hdgst": false, 00:28:53.885 "ddgst": false 00:28:53.885 }, 00:28:53.885 "method": "bdev_nvme_attach_controller" 00:28:53.885 },{ 00:28:53.885 "params": { 00:28:53.885 "name": "Nvme6", 00:28:53.885 "trtype": "tcp", 00:28:53.885 "traddr": "10.0.0.2", 00:28:53.885 "adrfam": "ipv4", 00:28:53.885 "trsvcid": "4420", 00:28:53.885 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:53.885 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:53.885 "hdgst": false, 00:28:53.885 "ddgst": false 00:28:53.885 }, 00:28:53.885 "method": "bdev_nvme_attach_controller" 00:28:53.885 },{ 00:28:53.885 "params": { 00:28:53.885 "name": "Nvme7", 00:28:53.885 "trtype": "tcp", 00:28:53.885 "traddr": "10.0.0.2", 00:28:53.885 "adrfam": "ipv4", 00:28:53.885 "trsvcid": "4420", 00:28:53.885 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:53.885 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:53.885 "hdgst": false, 00:28:53.885 "ddgst": false 00:28:53.885 }, 00:28:53.885 "method": "bdev_nvme_attach_controller" 00:28:53.885 },{ 00:28:53.885 "params": { 00:28:53.885 "name": "Nvme8", 00:28:53.885 "trtype": "tcp", 00:28:53.885 "traddr": "10.0.0.2", 00:28:53.885 "adrfam": "ipv4", 00:28:53.885 "trsvcid": "4420", 00:28:53.885 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:53.885 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:53.885 "hdgst": false, 00:28:53.885 "ddgst": false 00:28:53.885 }, 00:28:53.885 "method": "bdev_nvme_attach_controller" 00:28:53.885 },{ 00:28:53.885 "params": { 00:28:53.885 "name": "Nvme9", 00:28:53.885 "trtype": "tcp", 00:28:53.885 "traddr": "10.0.0.2", 00:28:53.885 "adrfam": "ipv4", 00:28:53.885 "trsvcid": "4420", 00:28:53.885 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:53.885 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:53.885 "hdgst": false, 00:28:53.885 "ddgst": false 00:28:53.885 }, 00:28:53.885 "method": "bdev_nvme_attach_controller" 00:28:53.885 },{ 00:28:53.885 "params": { 00:28:53.885 "name": "Nvme10", 00:28:53.885 "trtype": "tcp", 00:28:53.885 "traddr": "10.0.0.2", 00:28:53.885 "adrfam": "ipv4", 00:28:53.885 "trsvcid": "4420", 00:28:53.885 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:53.885 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:53.885 "hdgst": false, 00:28:53.885 "ddgst": false 00:28:53.885 }, 00:28:53.885 "method": "bdev_nvme_attach_controller" 00:28:53.885 }' 00:28:53.885 [2024-11-28 18:31:05.850222] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:28:53.885 [2024-11-28 18:31:05.850374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563782 ] 00:28:54.143 [2024-11-28 18:31:05.989187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.143 [2024-11-28 18:31:06.117438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.043 Running I/O for 10 seconds... 00:28:56.610 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.610 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:56.610 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:56.610 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.610 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=76 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 76 -ge 100 ']' 00:28:56.611 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2563782 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2563782 ']' 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2563782 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:56.869 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.127 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2563782 00:28:57.127 1439.00 IOPS, 89.94 MiB/s [2024-11-28T17:31:09.227Z] 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:28:57.127 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.127 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2563782' 00:28:57.127 killing process with pid 2563782 00:28:57.127 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2563782 00:28:57.127 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2563782 00:28:57.127 Received shutdown signal, test time was about 1.128885 seconds 00:28:57.127 00:28:57.128 Latency(us) 00:28:57.128 [2024-11-28T17:31:09.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.128 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.128 Verification LBA range: start 0x0 length 0x400 00:28:57.128 Nvme1n1 : 1.07 187.58 11.72 0.00 0.00 332621.17 8204.14 318456.41 00:28:57.128 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.128 Verification LBA range: start 0x0 length 0x400 00:28:57.128 Nvme2n1 : 1.07 192.79 12.05 0.00 0.00 317603.18 5485.61 304475.40 00:28:57.128 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.128 Verification LBA range: start 0x0 length 0x400 00:28:57.128 Nvme3n1 : 1.12 228.08 14.26 0.00 0.00 267534.79 20583.16 302921.96 00:28:57.128 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.128 Verification LBA range: start 0x0 length 0x400 00:28:57.128 Nvme4n1 : 1.11 234.36 14.65 0.00 0.00 253770.89 9709.04 302921.96 00:28:57.128 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.128 Verification LBA range: start 0x0 length 0x400 00:28:57.128 Nvme5n1 : 1.09 175.93 11.00 0.00 0.00 333194.43 25631.86 304475.40 00:28:57.128 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.128 Verification LBA range: start 0x0 length 0x400 00:28:57.128 Nvme6n1 : 1.11 176.88 11.06 0.00 0.00 322621.15 9029.40 307582.29 00:28:57.128 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.128 Verification LBA range: start 0x0 length 0x400 00:28:57.128 Nvme7n1 : 1.08 177.68 11.11 0.00 0.00 316066.07 22524.97 306028.85 00:28:57.128 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.128 Verification LBA range: start 0x0 length 0x400 00:28:57.128 Nvme8n1 : 1.13 226.95 14.18 0.00 0.00 243329.14 20583.16 301368.51 00:28:57.128 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.128 Verification LBA range: start 0x0 length 0x400 00:28:57.128 Nvme9n1 : 1.11 173.47 10.84 0.00 0.00 312041.37 26602.76 323116.75 00:28:57.128 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.128 Verification LBA range: start 0x0 length 0x400 00:28:57.128 Nvme10n1 : 1.12 171.84 10.74 0.00 0.00 308910.71 26020.22 330883.98 00:28:57.128 [2024-11-28T17:31:09.228Z] =================================================================================================================== 00:28:57.128 [2024-11-28T17:31:09.228Z] Total : 1945.57 121.60 0.00 0.00 296847.53 5485.61 330883.98 00:28:58.097 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:59.058 18:31:11 
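[editor's note] The waitforio gate that released this shutdown (read_io_count=76, then 131 a quarter second later) is a bounded polling loop over the bdevperf RPC socket. Its essence, using the same RPC and jq filter the trace shows:

# Poll until Nvme1n1 has completed at least 100 reads, giving up after 10 samples.
for ((i = 10; i > 0; i--)); do
    reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
            jq -r '.bdevs[0].num_read_ops')
    (( reads >= 100 )) && break
    sleep 0.25
done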
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2563463 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.058 rmmod nvme_tcp 00:28:59.058 rmmod nvme_fabrics 00:28:59.058 rmmod nvme_keyring 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:59.058 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:59.059 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2563463 ']' 00:28:59.059 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2563463 00:28:59.059 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2563463 ']' 00:28:59.059 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2563463 00:28:59.059 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:59.059 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.059 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2563463 00:28:59.316 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:59.316 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:59.316 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2563463' 00:28:59.316 killing process with pid 2563463 00:28:59.316 18:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2563463 00:28:59.316 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2563463 00:29:01.845 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:01.845 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:01.845 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:01.845 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:01.846 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:01.846 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:01.846 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:01.846 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:01.846 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:01.846 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.846 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.846 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.379 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.379 00:29:04.379 real 0m12.652s 00:29:04.379 user 0m42.860s 00:29:04.379 sys 0m2.051s 00:29:04.379 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.379 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.379 ************************************ 00:29:04.379 END TEST nvmf_shutdown_tc2 00:29:04.379 ************************************ 00:29:04.379 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:04.379 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:04.380 ************************************ 00:29:04.380 START TEST nvmf_shutdown_tc3 00:29:04.380 ************************************ 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
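[editor's note] The nvmf_tcp_fini steps that close out tc2 (and tc1 earlier) undo what nvmf_tcp_init set up: the SPDK-tagged iptables rules are dropped by filtering an iptables-save dump, the target namespace is removed, and the initiator address is flushed. Condensed below; the namespace removal is shown as a plain "ip netns delete" since the body of _remove_spdk_ns is not visible in this log:

# Remove only the rules the test added (they carry an 'SPDK_NVMF' comment),
# then tear down the target namespace and clear the initiator address.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1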
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.380 18:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.380 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
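For orientation: the xtrace above is nvmf/common.sh collecting the PCI device IDs it supports (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox IDs), keeping the E810 list for this rig, and reporting the first matching port (0000:0a:00.0); the same checks repeat below for the second port, 0000:0a:00.1. A minimal, purely illustrative sketch of that discovery step, assuming only the standard sysfs layout (this is not the harness's actual gather_supported_nvmf_pci_devs):

  # Illustrative only: list Intel E810 ports (0x1592 / 0x159b) and their netdevs.
  for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
    [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
    for net in "$pci"/net/*; do
      [[ -e $net ]] && echo "Found ${pci##*/} ($vendor - $device): ${net##*/}"
    done
  done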
00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.380 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.380 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.380 18:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.380 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.380 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.381 18:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:29:04.381 00:29:04.381 --- 10.0.0.2 ping statistics --- 00:29:04.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.381 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:29:04.381 00:29:04.381 --- 10.0.0.1 ping statistics --- 00:29:04.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.381 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2565089 00:29:04.381 18:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2565089 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2565089 ']' 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.381 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.381 [2024-11-28 18:31:16.354037] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:29:04.381 [2024-11-28 18:31:16.354230] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.640 [2024-11-28 18:31:16.510431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.640 [2024-11-28 18:31:16.653910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.640 [2024-11-28 18:31:16.653996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.640 [2024-11-28 18:31:16.654021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.640 [2024-11-28 18:31:16.654045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.640 [2024-11-28 18:31:16.654064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
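What the preceding traces establish is the usual loopback topology for these phy TCP runs: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule is added for TCP port 4420, both directions are ping-tested, and nvmf_tgt is then launched inside the namespace on core mask 0x1E. Condensed into one sketch (same commands as the trace, paths shortened, ordering simplified):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Target app runs on cores 1-4 (mask 0x1E) inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &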
00:29:04.640 [2024-11-28 18:31:16.656953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.640 [2024-11-28 18:31:16.657065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.640 [2024-11-28 18:31:16.657143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.640 [2024-11-28 18:31:16.657148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:05.206 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:05.206 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:05.206 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:05.206 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:05.206 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.206 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.206 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.206 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.206 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.464 [2024-11-28 18:31:17.308058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.464 18:31:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.464 Malloc1 00:29:05.464 [2024-11-28 18:31:17.453201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.464 Malloc2 00:29:05.722 Malloc3 00:29:05.722 Malloc4 00:29:05.980 Malloc5 00:29:05.980 Malloc6 00:29:05.980 Malloc7 00:29:06.238 Malloc8 00:29:06.238 Malloc9 00:29:06.497 Malloc10 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2565400 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2565400 /var/tmp/bdevperf.sock 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2565400 ']' 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:06.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.497 { 00:29:06.497 "params": { 00:29:06.497 "name": "Nvme$subsystem", 00:29:06.497 "trtype": "$TEST_TRANSPORT", 00:29:06.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.497 "adrfam": "ipv4", 00:29:06.497 "trsvcid": "$NVMF_PORT", 00:29:06.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.497 "hdgst": ${hdgst:-false}, 00:29:06.497 "ddgst": ${ddgst:-false} 00:29:06.497 }, 00:29:06.497 "method": "bdev_nvme_attach_controller" 00:29:06.497 } 00:29:06.497 EOF 00:29:06.497 )") 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.497 { 00:29:06.497 "params": { 00:29:06.497 "name": "Nvme$subsystem", 00:29:06.497 "trtype": "$TEST_TRANSPORT", 00:29:06.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.497 "adrfam": "ipv4", 00:29:06.497 "trsvcid": "$NVMF_PORT", 00:29:06.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.497 "hdgst": ${hdgst:-false}, 00:29:06.497 "ddgst": ${ddgst:-false} 00:29:06.497 }, 00:29:06.497 "method": "bdev_nvme_attach_controller" 00:29:06.497 } 00:29:06.497 EOF 00:29:06.497 )") 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.497 { 00:29:06.497 "params": { 00:29:06.497 "name": 
"Nvme$subsystem", 00:29:06.497 "trtype": "$TEST_TRANSPORT", 00:29:06.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.497 "adrfam": "ipv4", 00:29:06.497 "trsvcid": "$NVMF_PORT", 00:29:06.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.497 "hdgst": ${hdgst:-false}, 00:29:06.497 "ddgst": ${ddgst:-false} 00:29:06.497 }, 00:29:06.497 "method": "bdev_nvme_attach_controller" 00:29:06.497 } 00:29:06.497 EOF 00:29:06.497 )") 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.497 { 00:29:06.497 "params": { 00:29:06.497 "name": "Nvme$subsystem", 00:29:06.497 "trtype": "$TEST_TRANSPORT", 00:29:06.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.497 "adrfam": "ipv4", 00:29:06.497 "trsvcid": "$NVMF_PORT", 00:29:06.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.497 "hdgst": ${hdgst:-false}, 00:29:06.497 "ddgst": ${ddgst:-false} 00:29:06.497 }, 00:29:06.497 "method": "bdev_nvme_attach_controller" 00:29:06.497 } 00:29:06.497 EOF 00:29:06.497 )") 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.497 { 00:29:06.497 "params": { 00:29:06.497 "name": "Nvme$subsystem", 00:29:06.497 "trtype": "$TEST_TRANSPORT", 00:29:06.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.497 "adrfam": "ipv4", 00:29:06.497 "trsvcid": "$NVMF_PORT", 00:29:06.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.497 "hdgst": ${hdgst:-false}, 00:29:06.497 "ddgst": ${ddgst:-false} 00:29:06.497 }, 00:29:06.497 "method": "bdev_nvme_attach_controller" 00:29:06.497 } 00:29:06.497 EOF 00:29:06.497 )") 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.497 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.497 { 00:29:06.497 "params": { 00:29:06.497 "name": "Nvme$subsystem", 00:29:06.497 "trtype": "$TEST_TRANSPORT", 00:29:06.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "$NVMF_PORT", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.498 "hdgst": ${hdgst:-false}, 00:29:06.498 "ddgst": ${ddgst:-false} 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 } 00:29:06.498 EOF 00:29:06.498 )") 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.498 { 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme$subsystem", 00:29:06.498 "trtype": "$TEST_TRANSPORT", 00:29:06.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "$NVMF_PORT", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.498 "hdgst": ${hdgst:-false}, 00:29:06.498 "ddgst": ${ddgst:-false} 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 } 00:29:06.498 EOF 00:29:06.498 )") 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.498 { 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme$subsystem", 00:29:06.498 "trtype": "$TEST_TRANSPORT", 00:29:06.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "$NVMF_PORT", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.498 "hdgst": ${hdgst:-false}, 00:29:06.498 "ddgst": ${ddgst:-false} 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 } 00:29:06.498 EOF 00:29:06.498 )") 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.498 { 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme$subsystem", 00:29:06.498 "trtype": "$TEST_TRANSPORT", 00:29:06.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "$NVMF_PORT", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.498 "hdgst": ${hdgst:-false}, 00:29:06.498 "ddgst": ${ddgst:-false} 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 } 00:29:06.498 EOF 00:29:06.498 )") 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:06.498 { 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme$subsystem", 00:29:06.498 "trtype": "$TEST_TRANSPORT", 00:29:06.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "$NVMF_PORT", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.498 "hdgst": ${hdgst:-false}, 00:29:06.498 "ddgst": ${ddgst:-false} 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 } 00:29:06.498 EOF 00:29:06.498 )") 00:29:06.498 18:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:06.498 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme1", 00:29:06.498 "trtype": "tcp", 00:29:06.498 "traddr": "10.0.0.2", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "4420", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:06.498 "hdgst": false, 00:29:06.498 "ddgst": false 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 },{ 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme2", 00:29:06.498 "trtype": "tcp", 00:29:06.498 "traddr": "10.0.0.2", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "4420", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:06.498 "hdgst": false, 00:29:06.498 "ddgst": false 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 },{ 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme3", 00:29:06.498 "trtype": "tcp", 00:29:06.498 "traddr": "10.0.0.2", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "4420", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:06.498 "hdgst": false, 00:29:06.498 "ddgst": false 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 },{ 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme4", 00:29:06.498 "trtype": "tcp", 00:29:06.498 "traddr": "10.0.0.2", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "4420", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:06.498 "hdgst": false, 00:29:06.498 "ddgst": false 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 },{ 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme5", 00:29:06.498 "trtype": "tcp", 00:29:06.498 "traddr": "10.0.0.2", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "4420", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:06.498 "hdgst": false, 00:29:06.498 "ddgst": false 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 },{ 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme6", 00:29:06.498 "trtype": "tcp", 00:29:06.498 "traddr": "10.0.0.2", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "4420", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:06.498 "hdgst": false, 00:29:06.498 "ddgst": false 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 },{ 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme7", 00:29:06.498 "trtype": "tcp", 00:29:06.498 "traddr": "10.0.0.2", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "4420", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:06.498 "hdgst": false, 00:29:06.498 "ddgst": false 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 },{ 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme8", 00:29:06.498 "trtype": "tcp", 
00:29:06.498 "traddr": "10.0.0.2", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "4420", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:06.498 "hdgst": false, 00:29:06.498 "ddgst": false 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 },{ 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme9", 00:29:06.498 "trtype": "tcp", 00:29:06.498 "traddr": "10.0.0.2", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "4420", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:06.498 "hdgst": false, 00:29:06.498 "ddgst": false 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 },{ 00:29:06.498 "params": { 00:29:06.498 "name": "Nvme10", 00:29:06.498 "trtype": "tcp", 00:29:06.498 "traddr": "10.0.0.2", 00:29:06.498 "adrfam": "ipv4", 00:29:06.498 "trsvcid": "4420", 00:29:06.498 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:06.498 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:06.498 "hdgst": false, 00:29:06.498 "ddgst": false 00:29:06.498 }, 00:29:06.498 "method": "bdev_nvme_attach_controller" 00:29:06.498 }' 00:29:06.498 [2024-11-28 18:31:18.484164] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:29:06.498 [2024-11-28 18:31:18.484346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2565400 ] 00:29:06.756 [2024-11-28 18:31:18.624949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.756 [2024-11-28 18:31:18.753420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.657 Running I/O for 10 seconds... 
00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:09.224 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2565089 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2565089 ']' 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2565089 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:09.482 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2565089 00:29:09.756 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:09.756 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:09.757 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2565089' 00:29:09.757 killing process with pid 2565089 00:29:09.757 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2565089 00:29:09.757 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2565089 00:29:09.757 [2024-11-28 18:31:21.598019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598206] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598606] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.598989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599061] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.599351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:09.757
[2024-11-28 18:31:21.603525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.757 [2024-11-28 18:31:21.603582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.757 [2024-11-28 18:31:21.603607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.603620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.757 [2024-11-28 18:31:21.603648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.757 [2024-11-28 18:31:21.603651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.603671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.757 [2024-11-28 18:31:21.603673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.603694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.603694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.757 [2024-11-28 18:31:21.603715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.603719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.757 [2024-11-28 18:31:21.603733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.757 [2024-11-28 18:31:21.603740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.758
[2024-11-28 18:31:21.603751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set
00:29:09.758 [2024-11-28 18:31:21.603962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.603997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 
00:29:09.758 [2024-11-28 18:31:21.604357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.604711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.607810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 
00:29:09.758 [2024-11-28 18:31:21.607849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.607876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.607893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.607912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.607930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.607948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.607965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.607982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 
00:29:09.758 [2024-11-28 18:31:21.608261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.758 [2024-11-28 18:31:21.608353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 
00:29:09.759 [2024-11-28 18:31:21.608654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.608996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.609018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007880 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.611572] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.759 [2024-11-28 18:31:21.612571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.759 [2024-11-28 18:31:21.612582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.759 [2024-11-28 18:31:21.612635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759
[2024-11-28 18:31:21.612665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.759 [2024-11-28 18:31:21.612674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.759 [2024-11-28 18:31:21.612691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759
[2024-11-28 18:31:21.612718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.759 [2024-11-28 18:31:21.612730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.759 [2024-11-28 18:31:21.612748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759
[2024-11-28 18:31:21.612767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.759 [2024-11-28 18:31:21.612784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.759 [2024-11-28 18:31:21.612801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.759
[2024-11-28 18:31:21.612837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.759 [2024-11-28 18:31:21.612889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.759
[2024-11-28 18:31:21.612917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.759 [2024-11-28 18:31:21.612937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.759 [2024-11-28 18:31:21.612954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759
[2024-11-28 18:31:21.612968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.759 [2024-11-28 18:31:21.612971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.612992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.759 [2024-11-28 18:31:21.613006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759
[2024-11-28 18:31:21.613013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.759 [2024-11-28 18:31:21.613023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.613055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.759 [2024-11-28 18:31:21.613059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.613086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.613088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.759
[2024-11-28 18:31:21.613117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.759 [2024-11-28 18:31:21.613123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.759 [2024-11-28 18:31:21.613137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760
[2024-11-28 18:31:21.613171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760
[2024-11-28 18:31:21.613228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760
[2024-11-28 18:31:21.613278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760
[2024-11-28 18:31:21.613349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760
[2024-11-28 18:31:21.613425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760
[2024-11-28 18:31:21.613472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760
[2024-11-28 18:31:21.613526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760
[2024-11-28 18:31:21.613578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760
[2024-11-28 18:31:21.613631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760
[2024-11-28 18:31:21.613686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760
[2024-11-28 18:31:21.613737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760
[2024-11-28 18:31:21.613790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760 [2024-11-28 18:31:21.613837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007c80 is same with the state(6) to be set 00:29:09.760
[2024-11-28 18:31:21.613860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.613953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.613978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.614001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760
[2024-11-28 18:31:21.614026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.614048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.614105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.614131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.614157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.614180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.614204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-11-28 18:31:21.614227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.760 [2024-11-28 18:31:21.614252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.614967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.614991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:09.761 [2024-11-28 18:31:21.615241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 
18:31:21.615736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.761 [2024-11-28 18:31:21.615804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.761 [2024-11-28 18:31:21.615825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.615849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.762 [2024-11-28 18:31:21.615870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.615892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fb600 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.616370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.616413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.616437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.616458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.616479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.616499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.616519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.616539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.616573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.616633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.616664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.616688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.616709] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.616730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.616751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.616772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.616791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.616810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.616878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.616906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.616929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.616949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.616970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.616990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.617012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.762 [2024-11-28 18:31:21.617033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.762 [2024-11-28 18:31:21.617051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.617148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:09.762 [2024-11-28 18:31:21.621316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 
18:31:21.621442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 
18:31:21.621840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.621985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 
18:31:21.622268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.762 [2024-11-28 18:31:21.622321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.622550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 
18:31:21.624234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 
18:31:21.624639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.624996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 
18:31:21.625014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.625316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.627708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.627749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.627771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.627790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.627809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.763 [2024-11-28 
18:31:21.627881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.763 [2024-11-28 18:31:21.627901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.627921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.627940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.627959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.627977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.627996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 
18:31:21.628455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.628991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 
18:31:21.629174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.629682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.632991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 
18:31:21.633135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 18:31:21.633500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.764 [2024-11-28 
18:31:21.633519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 
18:31:21.633904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.633991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.634009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.634026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.634048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.634067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.634110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.634132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.634150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.634168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.634185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.634202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 
18:31:21.636620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.636982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 
18:31:21.637000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.765 [2024-11-28 18:31:21.637239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 
18:31:21.637413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.637641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:09.766 [2024-11-28 18:31:21.637748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:09.766 [2024-11-28 18:31:21.637853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.637886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.637925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.637952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.637974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.637994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.638153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:09.766 [2024-11-28 18:31:21.638205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.638411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:09.766 [2024-11-28 18:31:21.638460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:09.766 [2024-11-28 18:31:21.638533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.638777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.638931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.638950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.639010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.639029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.639062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.639103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.639127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.639149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.639170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.639191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.766 [2024-11-28 18:31:21.639211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.639230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set 00:29:09.766 [2024-11-28 18:31:21.639954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.766 [2024-11-28 18:31:21.639991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.640027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.766 [2024-11-28 18:31:21.640051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:09.766 [2024-11-28 18:31:21.640089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.766 [2024-11-28 18:31:21.640121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.640147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.766 [2024-11-28 18:31:21.640169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.766 [2024-11-28 18:31:21.640194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 
[2024-11-28 18:31:21.640589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.640963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.640986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 
18:31:21.641064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 
18:31:21.641564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.641958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.641979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.642002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.642040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 
18:31:21.642065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.642087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.767 [2024-11-28 18:31:21.642120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.767 [2024-11-28 18:31:21.642142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 
18:31:21.642538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.642958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.642979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 
18:31:21.643003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.643049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.643529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.643595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.643644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.643691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.643737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.643783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.643829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.643881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.643928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.643976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.643998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.644023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.644045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.644070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.644091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.644127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.644150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.644174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.644196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.644221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.644243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.644268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.644289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.644314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.644335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.644360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.644382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.644406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.768 [2024-11-28 18:31:21.644427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.768 [2024-11-28 18:31:21.644451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.644478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.644503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.644525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.644549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.644571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.644595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.644617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.644641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.644663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.644688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.644709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.644734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.644756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.644780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.644801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.644825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.644847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.644871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.644892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.644917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.644951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.644978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.645973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.645995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.646021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.646045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.646070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.646101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.646129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.646153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.646179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.646201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.646227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.646250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.646274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.646296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.646326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.646349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.769 [2024-11-28 18:31:21.646375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.769 [2024-11-28 18:31:21.646397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.646421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.646443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.646468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.646490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.646515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.646538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.646563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.646585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.646609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.646631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.646654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9f80 is same with the state(6) to be set 00:29:09.770 [2024-11-28 18:31:21.650525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:09.770 [2024-11-28 18:31:21.650788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.770 [2024-11-28 18:31:21.650839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420 00:29:09.770 [2024-11-28 18:31:21.650869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set 00:29:09.770 [2024-11-28 18:31:21.650941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:09.770 [2024-11-28 18:31:21.651005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:09.770 [2024-11-28 18:31:21.651107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.770 [2024-11-28 18:31:21.651140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 
18:31:21.651170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.770 [2024-11-28 18:31:21.651191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.651213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.770 [2024-11-28 18:31:21.651242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.651264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:09.770 [2024-11-28 18:31:21.651286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.651305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:09.770 [2024-11-28 18:31:21.651363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:09.770 [2024-11-28 18:31:21.651411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:09.770 [2024-11-28 18:31:21.651463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:09.770 [2024-11-28 18:31:21.651599] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.770 [2024-11-28 18:31:21.652693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:09.770 [2024-11-28 18:31:21.652760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:09.770 [2024-11-28 18:31:21.652886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.770 [2024-11-28 18:31:21.652923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:09.770 [2024-11-28 18:31:21.652947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:09.770 [2024-11-28 18:31:21.652975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:09.770 [2024-11-28 18:31:21.653592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.653625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.653662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.653686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.653713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.653735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.653760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.653782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.653808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.653830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.653855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.653878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.653909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.653932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.653957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.653980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.654004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.654026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.654051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.654074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.654111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.654136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.654162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.654184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.654210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.654232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.654257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.654279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.654304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.770 [2024-11-28 18:31:21.654326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.770 [2024-11-28 18:31:21.654351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.654973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.654998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
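(Editor's note, a hedged aside: the notices above are spdk_nvme_print_command/spdk_nvme_print_completion reporting each queued WRITE and READ that was completed with ABORTED - SQ DELETION (00/08) while the test resets the controllers, which is why the same pattern repeats for cid 0 through 63 on every qpair. To condense a saved console log into a per-queue abort count, a small shell filter along the lines of the sketch below can be used; the script and log-file names are placeholders and not part of the test suite.)

  #!/usr/bin/env bash
  # Hypothetical log-reduction helper: count "ABORTED - SQ DELETION" completions
  # per submission queue in a saved autotest console log.
  # Usage: ./count_sq_deletion_aborts.sh console.log
  log="${1:?usage: $0 <console-log>}"

  # Pull out every "... qid:N ..." abort notice, keep only the qid token,
  # then tally how often each queue appears.
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' "$log" \
    | awk '{print $NF}' \
    | sort | uniq -c | sort -rn

(The same filter can be pointed at per-test log fragments to confirm that only the expected qpairs saw aborts during the reset; output continues below.)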
00:29:09.771 [2024-11-28 18:31:21.655234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 
18:31:21.655705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.655969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.655991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.656016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.656038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.656062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.656085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.656118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.771 [2024-11-28 18:31:21.656141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.771 [2024-11-28 18:31:21.656166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656188] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.656706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.656728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa200 is same with the state(6) to be set 00:29:09.772 [2024-11-28 18:31:21.658345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658748] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.658978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.658999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.772 [2024-11-28 18:31:21.659677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.772 [2024-11-28 18:31:21.659701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.659723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.659748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.659770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.659795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.659833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.659859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.659881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.659906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.659928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.659953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.659975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.659999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:09.773 [2024-11-28 18:31:21.660731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.660962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.660987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.661012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.661038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.661060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.661084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.661114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.661140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.661162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.661187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 
18:31:21.661209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.661233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.661255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.661279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.661301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.661326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.661348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.661372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.661394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.661419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.773 [2024-11-28 18:31:21.661441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.773 [2024-11-28 18:31:21.664218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:09.773 [2024-11-28 18:31:21.664280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:09.773 [2024-11-28 18:31:21.664383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:09.773 [2024-11-28 18:31:21.664420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:09.774 [2024-11-28 18:31:21.664445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:09.774 [2024-11-28 18:31:21.664472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:09.774 [2024-11-28 18:31:21.664496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:09.774 [2024-11-28 18:31:21.664650] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
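Editorial note: the cluster of messages just above is where the SQ-deletion aborts turn into a failed controller reset: the tqpair flush hits an already-closed socket ("Bad file descriptor"), cnode10 is left in a failed state after reinitialization fails, and a failover on cnode1 is skipped because one is already in progress. When scanning a console capture of this size, a shell pass over the saved output makes that sequence easier to spot; the lines below are only an illustrative post-processing sketch (the file name nvmf-console.log is an assumed capture of this console output, not something the autotest scripts above produce):

# Hypothetical helper: count completions aborted by the submission-queue deletion
grep -c 'ABORTED - SQ DELETION' nvmf-console.log

# Hypothetical helper: list which subsystems ended up in a failed state, with counts
grep -o '\[nqn\.2016-06\.io\.spdk:cnode[0-9]*, 1\] in failed state' nvmf-console.log | sort | uniq -c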
00:29:09.774 [2024-11-28 18:31:21.664897] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.774 [2024-11-28 18:31:21.664995] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.774 [2024-11-28 18:31:21.665084] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.774 [2024-11-28 18:31:21.665179] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:09.774 [2024-11-28 18:31:21.665247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665663] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.665965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.665989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.774 [2024-11-28 18:31:21.666833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.774 [2024-11-28 18:31:21.666855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.666879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.666900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.666925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.666951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.666976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.666998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:09.775 [2024-11-28 18:31:21.667592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.667962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.667983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.668006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.668028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 
18:31:21.668052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.775 [2024-11-28 18:31:21.668074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.775 [2024-11-28 18:31:21.668387] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:29:09.775 [2024-11-28 18:31:21.668703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.775 [2024-11-28 18:31:21.668746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:09.775 [2024-11-28 18:31:21.668772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:09.775 [2024-11-28 18:31:21.668879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.775 [2024-11-28 18:31:21.668915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:09.775 [2024-11-28 18:31:21.668938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:09.775 [2024-11-28 18:31:21.669031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.775 [2024-11-28 18:31:21.669065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:09.775 [2024-11-28 18:31:21.669088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:09.775 [2024-11-28 18:31:21.669120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:09.775 [2024-11-28 18:31:21.669141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:09.775 [2024-11-28 18:31:21.669161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:09.775 [2024-11-28 18:31:21.669181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
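Editorial note: at this point the reconnect attempts themselves fail: each posix_sock_create connect() to 10.0.0.2 port 4420 returns errno 111 (ECONNREFUSED), so the reset path for cnode1 also gives up. A quick way to see how many reconnects were refused, and against which qpairs, is another illustrative shell pass over the same assumed nvmf-console.log capture:

# Hypothetical helper: count refused TCP reconnect attempts (errno 111 = ECONNREFUSED)
grep -c 'connect() failed, errno = 111' nvmf-console.log

# Hypothetical helper: show which tqpair handles hit the socket connection error, with counts
grep -o 'sock connection error of tqpair=0x[0-9a-f]*' nvmf-console.log | sort | uniq -c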
00:29:09.775 [2024-11-28 18:31:21.670272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 
18:31:21.670785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.670970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.670991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671264] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.671965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.671986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.672011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.672033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.672058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.672079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.672111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.672135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.776 [2024-11-28 18:31:21.672160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.776 [2024-11-28 18:31:21.672182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.672972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.672997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.673024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.673050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.673072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.673282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.673314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.673342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.673366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.673392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.673414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.673439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.673461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.673486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.673508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.673530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa700 is same with the state(6) to be set 00:29:09.777 [2024-11-28 18:31:21.675055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.777 [2024-11-28 18:31:21.675791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.777 [2024-11-28 18:31:21.675815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.675836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.675860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.675880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.675921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.675944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.675973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.675995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.676960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.676981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.778 [2024-11-28 18:31:21.677635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.778 [2024-11-28 18:31:21.677659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.677680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.677704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.677730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.677756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.677779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.677803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.677825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.677848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.677870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.677893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.677924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.677948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.677969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.677993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.678014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.678039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.678061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.678084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.678119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.678142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fa980 is same with the state(6) to be set 00:29:09.779 [2024-11-28 18:31:21.679671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.679703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.679739] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.679762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.679787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.679809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.679833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.679860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.679885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.679907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.679932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.679953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.679993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.779 [2024-11-28 18:31:21.680836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.779 [2024-11-28 18:31:21.680858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.680882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.680904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.680929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.680950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.680974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.680996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:09.780 [2024-11-28 18:31:21.681661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.681959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.681981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 
18:31:21.682130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682593] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.780 [2024-11-28 18:31:21.682683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.780 [2024-11-28 18:31:21.682705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fac00 is same with the state(6) to be set 00:29:09.781 [2024-11-28 18:31:21.684229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.684974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.684995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.685970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.685994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.686015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.686039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.781 [2024-11-28 18:31:21.686072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.781 [2024-11-28 18:31:21.686104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.686968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.686989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.687013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.687034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.687058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.687079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.687111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.687134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.687158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.687180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.687204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.687225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.687249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.782 [2024-11-28 18:31:21.687270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:09.782 [2024-11-28 18:31:21.687292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001fae80 is same with the state(6) to be set 00:29:09.782 [2024-11-28 18:31:21.693132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:09.782 [2024-11-28 18:31:21.693218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:09.782 [2024-11-28 18:31:21.693256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:09.782 [2024-11-28 18:31:21.693287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:09.782 [2024-11-28 18:31:21.693378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:09.782 [2024-11-28 18:31:21.693415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:09.782 [2024-11-28 18:31:21.693444] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor
00:29:09.782 [2024-11-28 18:31:21.693494] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:29:09.782 [2024-11-28 18:31:21.693530] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:29:09.782 [2024-11-28 18:31:21.693573] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:29:09.782 [2024-11-28 18:31:21.693619] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:29:09.782 [2024-11-28 18:31:21.693647] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:29:09.782 [2024-11-28 18:31:21.693976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:09.782 [2024-11-28 18:31:21.694017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:09.782 task offset: 8832 on job bdev=Nvme10n1 fails
00:29:09.782
00:29:09.782 Latency(us)
00:29:09.782 [2024-11-28T17:31:21.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.782 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.782 Job: Nvme1n1 ended in about 0.91 seconds with error
00:29:09.782 Verification LBA range: start 0x0 length 0x400
00:29:09.782 Nvme1n1 : 0.91 140.25 8.77 70.13 0.00 300650.95 21554.06 296708.17
00:29:09.782 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.782 Job: Nvme2n1 ended in about 0.92 seconds with error
00:29:09.782 Verification LBA range: start 0x0 length 0x400
00:29:09.782 Nvme2n1 : 0.92 138.73 8.67 69.36 0.00 297383.76 26991.12 278066.82
00:29:09.782 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.782 Job: Nvme3n1 ended in about 0.93 seconds with error
00:29:09.782 Verification LBA range: start 0x0 length 0x400
00:29:09.782 Nvme3n1 : 0.93 138.03 8.63 69.01 0.00 292253.90 21651.15 302921.96
00:29:09.782 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.782 Job: Nvme4n1 ended in about 0.94 seconds with error
00:29:09.782 Verification LBA range: start 0x0 length 0x400
00:29:09.782 Nvme4n1 : 0.94 140.51 8.78 68.13 0.00 283859.05 9806.13 301368.51
00:29:09.782 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.782 Job: Nvme5n1 ended in about 0.94 seconds with error
00:29:09.782 Verification LBA range: start 0x0 length 0x400
00:29:09.782 Nvme5n1 : 0.94 135.59 8.47 67.80 0.00 284633.06 24563.86 292047.83
00:29:09.782 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.782 Job: Nvme6n1 ended in about 0.95 seconds with error
00:29:09.782 Verification LBA range: start 0x0 length 0x400
00:29:09.782 Nvme6n1 : 0.95 134.94 8.43 67.47 0.00 279626.33 22719.15 304475.40
00:29:09.782 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.782 Job: Nvme7n1 ended in about 0.95 seconds with error
00:29:09.782 Verification LBA range: start 0x0 length 0x400
00:29:09.783 Nvme7n1 : 0.95 134.29 8.39 67.15 0.00 274526.56 30874.74 298261.62
00:29:09.783 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.783 Job: Nvme8n1 ended in about 0.96 seconds with error
00:29:09.783 Verification LBA range: start 0x0 length 0x400
00:29:09.783 Nvme8n1 : 0.96 137.83 8.61 62.65 0.00 268709.48 20680.25 304475.40
00:29:09.783 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.783 Job: Nvme9n1 ended in about 0.91 seconds with error
00:29:09.783 Verification LBA range: start 0x0 length 0x400
00:29:09.783 Nvme9n1 : 0.91 139.94 8.75 69.97 0.00 248642.37 13107.20 312242.63
00:29:09.783 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:09.783 Job: Nvme10n1 ended in about 0.90 seconds with error
00:29:09.783 Verification LBA range: start 0x0 length 0x400
00:29:09.783 Nvme10n1 : 0.90 76.47 4.78 70.93 0.00 343678.74 23204.60 333990.87
00:29:09.783 [2024-11-28T17:31:21.883Z] ===================================================================================================================
00:29:09.783 [2024-11-28T17:31:21.883Z] Total : 1316.59 82.29 682.59 0.00 285608.32 9806.13 333990.87
00:29:09.783 [2024-11-28 18:31:21.782146] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:09.783 [2024-11-28 18:31:21.782621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.783 [2024-11-28 18:31:21.782678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7f00 with addr=10.0.0.2, port=4420
00:29:09.783 [2024-11-28 18:31:21.782711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7f00 is same with the state(6) to be set
00:29:09.783 [2024-11-28 18:31:21.782829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.783 [2024-11-28 18:31:21.782863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420
00:29:09.783 [2024-11-28 18:31:21.782886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(6) to be set
00:29:09.783 [2024-11-28 18:31:21.782993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.783 [2024-11-28 18:31:21.783027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4d00 with addr=10.0.0.2, port=4420
00:29:09.783 [2024-11-28 18:31:21.783050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4d00 is same with the state(6) to be set
00:29:09.783 [2024-11-28 18:31:21.783219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.783 [2024-11-28 18:31:21.783254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5700 with addr=10.0.0.2, port=4420
00:29:09.783 [2024-11-28 18:31:21.783278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5700 is same with the state(6) to be set
00:29:09.783 [2024-11-28 18:31:21.783301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:29:09.783 [2024-11-28 18:31:21.783323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:29:09.783 [2024-11-28 18:31:21.783348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1]
in failed state. 00:29:09.783 [2024-11-28 18:31:21.783373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:09.783 [2024-11-28 18:31:21.783398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:09.783 [2024-11-28 18:31:21.783417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:09.783 [2024-11-28 18:31:21.783445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:09.783 [2024-11-28 18:31:21.783466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:09.783 [2024-11-28 18:31:21.783503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:09.783 [2024-11-28 18:31:21.783521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:09.783 [2024-11-28 18:31:21.783540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:09.783 [2024-11-28 18:31:21.783558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:09.783 [2024-11-28 18:31:21.786211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:09.783 [2024-11-28 18:31:21.786449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-28 18:31:21.786486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420 00:29:09.783 [2024-11-28 18:31:21.786511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(6) to be set 00:29:09.783 [2024-11-28 18:31:21.786618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-28 18:31:21.786651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6b00 with addr=10.0.0.2, port=4420 00:29:09.783 [2024-11-28 18:31:21.786674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6b00 is same with the state(6) to be set 00:29:09.783 [2024-11-28 18:31:21.786710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7f00 (9): Bad file descriptor 00:29:09.783 [2024-11-28 18:31:21.786747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:09.783 [2024-11-28 18:31:21.786776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4d00 (9): Bad file descriptor 00:29:09.783 [2024-11-28 18:31:21.786804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5700 (9): Bad file descriptor 00:29:09.783 [2024-11-28 18:31:21.786916] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:29:09.783 [2024-11-28 18:31:21.786952] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:29:09.783 [2024-11-28 18:31:21.786982] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:09.783 [2024-11-28 18:31:21.787010] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:29:09.783 [2024-11-28 18:31:21.787898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.783 [2024-11-28 18:31:21.787937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:29:09.783 [2024-11-28 18:31:21.787961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(6) to be set 00:29:09.783 [2024-11-28 18:31:21.788001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor 00:29:09.783 [2024-11-28 18:31:21.788030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6b00 (9): Bad file descriptor 00:29:09.783 [2024-11-28 18:31:21.788055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:09.783 [2024-11-28 18:31:21.788074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:09.783 [2024-11-28 18:31:21.788108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:09.783 [2024-11-28 18:31:21.788132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:09.783 [2024-11-28 18:31:21.788154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:09.783 [2024-11-28 18:31:21.788173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:09.783 [2024-11-28 18:31:21.788192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:09.783 [2024-11-28 18:31:21.788210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:09.783 [2024-11-28 18:31:21.788230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:09.783 [2024-11-28 18:31:21.788250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:09.783 [2024-11-28 18:31:21.788268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:09.783 [2024-11-28 18:31:21.788286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:09.783 [2024-11-28 18:31:21.788306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:09.783 [2024-11-28 18:31:21.788324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:09.783 [2024-11-28 18:31:21.788342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:29:09.783 [2024-11-28 18:31:21.788360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:09.783 [2024-11-28 18:31:21.788498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:09.783 [2024-11-28 18:31:21.788532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:09.783 [2024-11-28 18:31:21.788557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:09.783 [2024-11-28 18:31:21.788625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:29:09.783 [2024-11-28 18:31:21.788654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:09.783 [2024-11-28 18:31:21.788674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:09.783 [2024-11-28 18:31:21.788692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:09.783 [2024-11-28 18:31:21.788712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:09.783 [2024-11-28 18:31:21.788731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:09.783 [2024-11-28 18:31:21.788749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:09.784 [2024-11-28 18:31:21.788767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:09.784 [2024-11-28 18:31:21.788785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:29:09.784 [2024-11-28 18:31:21.788992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-28 18:31:21.789028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3900 with addr=10.0.0.2, port=4420 00:29:09.784 [2024-11-28 18:31:21.789052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3900 is same with the state(6) to be set 00:29:09.784 [2024-11-28 18:31:21.789202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-28 18:31:21.789236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2f00 with addr=10.0.0.2, port=4420 00:29:09.784 [2024-11-28 18:31:21.789260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2f00 is same with the state(6) to be set 00:29:09.784 [2024-11-28 18:31:21.789418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.784 [2024-11-28 18:31:21.789450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f7500 with addr=10.0.0.2, port=4420 00:29:09.784 [2024-11-28 18:31:21.789473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f7500 is same with the state(6) to be set 00:29:09.784 [2024-11-28 18:31:21.789494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:09.784 [2024-11-28 18:31:21.789513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:09.784 [2024-11-28 18:31:21.789532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:09.784 [2024-11-28 18:31:21.789552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:09.784 [2024-11-28 18:31:21.789618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3900 (9): Bad file descriptor 00:29:09.784 [2024-11-28 18:31:21.789653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:29:09.784 [2024-11-28 18:31:21.789698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:29:09.784 [2024-11-28 18:31:21.789758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:09.784 [2024-11-28 18:31:21.789784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:09.784 [2024-11-28 18:31:21.789804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:09.784 [2024-11-28 18:31:21.789822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:29:09.784 [2024-11-28 18:31:21.789843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:09.784 [2024-11-28 18:31:21.789862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:09.784 [2024-11-28 18:31:21.789881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:09.784 [2024-11-28 18:31:21.789900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:09.784 [2024-11-28 18:31:21.789920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:09.784 [2024-11-28 18:31:21.789938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:09.784 [2024-11-28 18:31:21.789956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:09.784 [2024-11-28 18:31:21.789974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:12.340 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2565400 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2565400 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2565400 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:13.716 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:13.717 18:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.717 rmmod nvme_tcp 00:29:13.717 rmmod nvme_fabrics 00:29:13.717 rmmod nvme_keyring 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2565089 ']' 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2565089 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2565089 ']' 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2565089 00:29:13.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2565089) - No such process 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2565089 is not found' 00:29:13.717 Process with pid 2565089 is not found 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.717 18:31:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:15.621 00:29:15.621 real 0m11.500s 00:29:15.621 user 0m33.779s 00:29:15.621 sys 0m1.954s 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:15.621 ************************************ 00:29:15.621 END TEST nvmf_shutdown_tc3 00:29:15.621 ************************************ 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:15.621 ************************************ 00:29:15.621 START TEST nvmf_shutdown_tc4 00:29:15.621 ************************************ 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:15.621 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:15.621 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
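The PCI walk traced above is how the harness decides which NICs this run may use: nvmf/common.sh pre-builds e810/x722/mlx arrays keyed by PCI vendor:device ID and then buckets every discovered port (here both 0x8086 - 0x159b functions land in the e810 list, so the E810/ice path is taken). A minimal stand-alone sketch of that classification, assuming sysfs paths and using only device IDs that appear in this run -- it is not the actual nvmf/common.sh code:

# Sketch only: bucket NICs by PCI vendor:device ID, as the harness does.
declare -A nic_family=(
  [8086:1592]=e810 [8086:159b]=e810   # Intel E810 (ice driver)
  [8086:37d2]=x722                    # Intel X722 (i40e driver)
  [15b3:1017]=mlx  [15b3:1019]=mlx    # Mellanox ConnectX (subset of the IDs listed above)
)
for dev in /sys/bus/pci/devices/*; do
  id="$(cat "$dev/vendor"):$(cat "$dev/device")"   # e.g. "0x8086:0x159b"
  id=${id//0x/}                                    # normalize to "8086:159b"
  [[ -n ${nic_family[$id]:-} ]] && echo "$dev -> ${nic_family[$id]}"
done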
00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:15.621 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.621 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:15.621 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:15.622 18:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:15.622 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:15.880 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:15.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:15.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:29:15.881 00:29:15.881 --- 10.0.0.2 ping statistics --- 00:29:15.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.881 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:15.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:15.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:29:15.881 00:29:15.881 --- 10.0.0.1 ping statistics --- 00:29:15.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.881 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2566571 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2566571 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2566571 ']' 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.881 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.881 [2024-11-28 18:31:27.857205] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:29:15.881 [2024-11-28 18:31:27.857343] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.139 [2024-11-28 18:31:28.013634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:16.139 [2024-11-28 18:31:28.157013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.139 [2024-11-28 18:31:28.157101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.139 [2024-11-28 18:31:28.157129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.139 [2024-11-28 18:31:28.157154] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.139 [2024-11-28 18:31:28.157174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.139 [2024-11-28 18:31:28.160212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.139 [2024-11-28 18:31:28.160325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:16.139 [2024-11-28 18:31:28.160374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.139 [2024-11-28 18:31:28.160396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:17.072 [2024-11-28 18:31:28.886303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.072 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:17.073 18:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.073 18:31:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:17.073 Malloc1 00:29:17.073 [2024-11-28 18:31:29.032166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.073 Malloc2 00:29:17.332 Malloc3 00:29:17.332 Malloc4 00:29:17.332 Malloc5 00:29:17.590 Malloc6 00:29:17.590 Malloc7 00:29:17.848 Malloc8 00:29:17.848 Malloc9 00:29:17.848 Malloc10 00:29:17.848 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.848 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:17.848 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.848 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:17.848 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2566887 00:29:17.848 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:17.848 18:31:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:18.106 [2024-11-28 18:31:30.055248] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
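At this point tc4 has stood up the target end to end: ten malloc-backed subsystems (Malloc1..Malloc10 on nqn.2016-06.io.spdk:cnode1..cnode10), a TCP listener on 10.0.0.2:4420, and an spdk_nvme_perf workload (the exact invocation is logged above) that will still be in flight when the target is killed. The per-subsystem RPC batch that shutdown.sh@29 accumulates into rpcs.txt is not echoed in the log; the following is a hedged reconstruction using standard scripts/rpc.py methods, with the malloc size, block size and serial numbers as placeholder values rather than the script's real ones:

# Illustrative only -- the real rpcs.txt is built by target/shutdown.sh.
for i in {1..10}; do
  cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
scripts/rpc.py < rpcs.txt   # replay the whole batch against the running nvmf_tgt

Killing the target while that perf job is still writing is the scenario this test exercises, and it is what produces the long run of "Write completed with error (sct=0, sc=8)" lines that follows: sc=8 is the NVMe generic "command aborted due to SQ deletion" status, and the -6 transport errors are ENXIO ("No such device or address", as the log itself notes) once the TCP qpairs go away.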
00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2566571 00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2566571 ']' 00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2566571 00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2566571 00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2566571' 00:29:23.375 killing process with pid 2566571 00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2566571 00:29:23.375 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2566571 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with 
error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 [2024-11-28 18:31:35.009013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 
starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 [2024-11-28 18:31:35.011251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 Write completed with error (sct=0, sc=8) 00:29:23.375 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 
00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 [2024-11-28 18:31:35.014119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 
00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 Write completed with error (sct=0, sc=8) 00:29:23.376 starting I/O failed: -6 00:29:23.376 [2024-11-28 18:31:35.024177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.376 NVMe io qpair process completion error 00:29:23.376 [2024-11-28 18:31:35.027821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:23.376 [2024-11-28 
18:31:35.027879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:23.376 [2024-11-28 18:31:35.027903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:23.376 [2024-11-28 18:31:35.027922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006880 is same with the state(6) to be set 00:29:23.376 [2024-11-28 18:31:35.028155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:23.376 [2024-11-28 18:31:35.028198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.028228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.028248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.028267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.028286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.028304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.028322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.028339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.028357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.030928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.030978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.031009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.031030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.031049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(6) to be set 00:29:23.377 [2024-11-28 18:31:35.031068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(6) to be set 00:29:23.377 Write completed with error (sct=0, sc=8) 00:29:23.377 starting I/O failed: -6 00:29:23.377 Write completed with error (sct=0, sc=8) 00:29:23.377 Write completed with error (sct=0, sc=8) 00:29:23.377 Write completed with error (sct=0, sc=8) 00:29:23.377 Write completed with error (sct=0, sc=8) 00:29:23.377 starting I/O failed: -6 00:29:23.377 Write 
completed with error (sct=0, sc=8)
[00:29:23.377 - 00:29:23.384: repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries condensed; the distinct nvme_qpair.c error messages and qpair completion-error markers are preserved in order below]
[2024-11-28 18:31:35.033703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[2024-11-28 18:31:35.035917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[2024-11-28 18:31:35.038758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[2024-11-28 18:31:35.049084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
NVMe io qpair process completion error
[2024-11-28 18:31:35.051227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[2024-11-28 18:31:35.053477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[2024-11-28 18:31:35.056249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[2024-11-28 18:31:35.069749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
NVMe io qpair process completion error
[2024-11-28 18:31:35.071815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[2024-11-28 18:31:35.074066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[2024-11-28 18:31:35.076806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[2024-11-28 18:31:35.089424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
NVMe io qpair process completion error
[2024-11-28 18:31:35.091439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[2024-11-28 18:31:35.093655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[2024-11-28 18:31:35.096423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[2024-11-28 18:31:35.105818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
NVMe io qpair process completion error
[2024-11-28 18:31:35.108006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[2024-11-28 18:31:35.110038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[2024-11-28 18:31:35.112930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries continue]
00:29:23.384 starting I/O
failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O 
failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 [2024-11-28 18:31:35.126996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.384 NVMe io qpair process completion error 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.384 starting I/O failed: -6 00:29:23.384 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 [2024-11-28 18:31:35.129223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O 
failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 [2024-11-28 18:31:35.131467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write 
completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.385 starting I/O failed: -6 00:29:23.385 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 [2024-11-28 18:31:35.134124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write 
completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write 
completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 [2024-11-28 18:31:35.146916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.386 NVMe io qpair process completion error 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write 
completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 [2024-11-28 18:31:35.148998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 Write completed with error (sct=0, sc=8) 00:29:23.386 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error 
(sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 [2024-11-28 18:31:35.151163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 
Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 [2024-11-28 18:31:35.153913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 
00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.387 Write completed with error (sct=0, sc=8) 00:29:23.387 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 [2024-11-28 18:31:35.169694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport 
error -6 (No such device or address) on qpair id 2 00:29:23.388 NVMe io qpair process completion error 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 
starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 
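
[Editor's note] The flood of entries condensed above comes from two layers of the host side once the target has been killed mid-run (see the shutdown.sh backtrace at the end of this log): each in-flight write is completed back to the perf tool with an NVMe error status, and the qpair itself then reports CQ transport error -6 (ENXIO, "No such device or address"), so new submissions fail with "starting I/O failed: -6". The sketch below is illustrative only -- it is not the spdk_nvme_perf code path -- and just shows where messages of this shape would be produced when driving the public SPDK NVMe API directly.

    /*
     * Illustrative sketch only (not the spdk_nvme_perf code path): where the
     * two kinds of messages above would come from when polling an I/O qpair
     * through the public SPDK NVMe API after the target has gone away.
     */
    #include <stdio.h>

    #include "spdk/nvme.h"

    /* Per-I/O completion callback: error completions show up here
     * (cf. the "Write completed with error (sct=..., sc=...)" lines). */
    void
    write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
                            cpl->status.sct, cpl->status.sc);
            }
    }

    /* Poller: a negative return means the qpair itself has failed, e.g.
     * -ENXIO (-6) once the TCP connection to the killed target is gone. */
    void
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

            if (rc < 0) {
                    fprintf(stderr, "qpair failed: %d\n", rc);
            }
    }
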
00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 Write completed with error (sct=0, sc=8) 00:29:23.388 starting I/O failed: -6 00:29:23.388 ================================================================= 00:29:23.388 ==2566887==ERROR: AddressSanitizer: heap-use-after-free on address 0x61200002b588 at pc 0x7fedca0009e8 bp 0x7ffe72d05020 sp 0x7ffe72d05018 00:29:23.388 READ of size 8 at 0x61200002b588 thread T0 00:29:23.388 #0 0x7fedca0009e7 in posix_sock_group_impl_poll /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/sock/posix/posix.c:2069 00:29:23.389 #1 0x7fedc9520f9e in sock_group_impl_poll_count /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/sock/sock.c:757 00:29:23.389 #2 0x7fedc95215cc in spdk_sock_group_poll_count /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/sock/sock.c:791 00:29:23.389 #3 0x7fedc9520d18 in spdk_sock_group_poll /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/sock/sock.c:742 00:29:23.389 #4 0x7fedc9941a01 in nvme_tcp_poll_group_process_completions /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:2829 00:29:23.389 #5 0x7fedc98fca2e in nvme_transport_poll_group_process_completions /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_transport.c:780 00:29:23.389 #6 0x7fedc99647b9 in spdk_nvme_poll_group_process_completions /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_poll_group.c:350 00:29:23.389 #7 0x40f3d1 in nvme_check_io /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/spdk_nvme_perf/perf.c:963 00:29:23.389 #8 0x41e133 in work_fn /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/spdk_nvme_perf/perf.c:1792 00:29:23.389 #9 0x42e805 in main /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/spdk_nvme_perf/perf.c:3376 00:29:23.389 #10 0x7fedc74c5149 in __libc_start_call_main (/usr/lib64/libc.so.6+0x28149) (BuildId: 0d710e9d9dc10c500b8119c85da75004183618e2) 00:29:23.389 #11 0x7fedc74c520a in __libc_start_main_impl (/usr/lib64/libc.so.6+0x2820a) (BuildId: 0d710e9d9dc10c500b8119c85da75004183618e2) 00:29:23.389 #12 0x404c74 in _start (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf+0x404c74) (BuildId: 5793b47f54147c706ceb748484a9b20564eb8743) 00:29:23.389 00:29:23.389 0x61200002b588 is located 72 bytes inside of 320-byte region [0x61200002b540,0x61200002b680) 00:29:23.389 freed by thread T0 here: 00:29:23.389 #0 0x7fedca11cfc8 in __interceptor_free.part.0 (/usr/lib64/libasan.so.8+0xd7fc8) (BuildId: 2e1c50524ff1a2e7e73c4565b46f3f51892353ea) 00:29:23.389 #1 0x7fedc9ff572b in posix_sock_close /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/sock/posix/posix.c:1249 00:29:23.389 #2 0x7fedc951d00f in 
spdk_sock_close /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/sock/sock.c:492 00:29:23.389 #3 0x7fedc9fe772d in spdk_sock_abort_requests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/sock.h:264 00:29:23.389 #4 0x7fedc9ffc461 in posix_sock_writev_async /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/sock/posix/posix.c:1702 00:29:23.389 #5 0x7fedc951d9c6 in spdk_sock_writev_async /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/sock/sock.c:538 00:29:23.389 #6 0x7fedc9919613 in _tcp_write_pdu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:516 00:29:23.389 #7 0x7fedc9919fae in tcp_write_pdu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:559 00:29:23.389 #8 0x7fedc991bf1e in pdu_compute_crc32 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:663 00:29:23.389 #9 0x7fedc991c843 in nvme_tcp_qpair_write_pdu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:686 00:29:23.389 #10 0x7fedc9921c86 in nvme_tcp_qpair_capsule_cmd_send /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:954 00:29:23.389 #11 0x7fedc9922b71 in nvme_tcp_qpair_submit_request /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:987 00:29:23.389 #12 0x7fedc98fa2b4 in nvme_transport_qpair_submit_request /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_transport.c:649 00:29:23.389 #13 0x7fedc98d6e6c in _nvme_qpair_submit_request /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_qpair.c:1051 00:29:23.389 #14 0x7fedc98d7963 in nvme_qpair_submit_request /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_qpair.c:1117 00:29:23.389 #15 0x7fedc988bcfc in spdk_nvme_ns_cmd_writev_with_md /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_ns_cmd.c:1133 00:29:23.389 #16 0x40efe8 in nvme_submit_io /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/spdk_nvme_perf/perf.c:936 00:29:23.389 #17 0x41817b in submit_single_io /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/spdk_nvme_perf/perf.c:1513 00:29:23.389 #18 0x419b1d in task_complete /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/spdk_nvme_perf/perf.c:1578 00:29:23.389 #19 0x41a683 in io_complete /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/spdk_nvme_perf/perf.c:1606 00:29:23.389 #20 0x7fedc9907c77 in nvme_complete_request /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_internal.h:1537 00:29:23.389 #21 0x7fedc9924494 in nvme_tcp_req_complete /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:1037 00:29:23.389 #22 0x7fedc9924859 in nvme_tcp_qpair_abort_reqs /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:1058 00:29:23.389 #23 0x7fedc99166e0 in nvme_tcp_ctrlr_disconnect_qpair /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:371 00:29:23.389 #24 0x7fedc98f919a in nvme_transport_ctrlr_disconnect_qpair /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_transport.c:567 00:29:23.389 #25 0x7fedc9917d2e in pdu_write_done /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:467 00:29:23.389 #26 0x7fedc9fe6f54 in spdk_sock_abort_requests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/sock.h:245 00:29:23.389 #27 0x7fedca00092e in posix_sock_group_impl_poll /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/sock/posix/posix.c:2072 00:29:23.389 #28 0x7fedc9520f9e in sock_group_impl_poll_count 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/sock/sock.c:757 00:29:23.389 #29 0x7fedc95215cc in spdk_sock_group_poll_count /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/sock/sock.c:791 00:29:23.389 00:29:23.389 previously allocated by thread T0 here: 00:29:23.389 #0 0x7fedca11dcd7 in calloc (/usr/lib64/libasan.so.8+0xd8cd7) (BuildId: 2e1c50524ff1a2e7e73c4565b46f3f51892353ea) 00:29:23.389 #1 0x7fedc9fecf2c in posix_sock_alloc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/sock/posix/posix.c:464 00:29:23.389 #2 0x7fedc9ff37e2 in posix_sock_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/sock/posix/posix.c:1100 00:29:23.389 #3 0x7fedc9ff3cb7 in posix_sock_connect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/sock/posix/posix.c:1130 00:29:23.389 #4 0x7fedc951b2af in spdk_sock_connect_ext /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/sock/sock.c:375 00:29:23.389 #5 0x7fedc993af2e in nvme_tcp_qpair_connect_sock /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:2286 00:29:23.389 #6 0x7fedc993c94f in nvme_tcp_ctrlr_create_qpair /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:2467 00:29:23.389 #7 0x7fedc993cc0d in nvme_tcp_ctrlr_create_io_qpair /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_tcp.c:2480 00:29:23.389 #8 0x7fedc98f7e64 in nvme_transport_ctrlr_create_io_qpair /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_transport.c:455 00:29:23.389 #9 0x7fedc9820d7a in nvme_ctrlr_create_io_qpair /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:392 00:29:23.389 #10 0x7fedc98225b6 in spdk_nvme_ctrlr_alloc_io_qpair /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:497 00:29:23.389 #11 0x4110bf in nvme_init_ns_worker_ctx /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/spdk_nvme_perf/perf.c:1043 00:29:23.389 #12 0x41acc9 in init_ns_worker_ctx /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/spdk_nvme_perf/perf.c:1641 00:29:23.389 #13 0x41c6da in work_fn /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/spdk_nvme_perf/perf.c:1734 00:29:23.389 #14 0x42e805 in main /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/spdk_nvme_perf/perf.c:3376 00:29:23.389 #15 0x7fedc74c5149 in __libc_start_call_main (/usr/lib64/libc.so.6+0x28149) (BuildId: 0d710e9d9dc10c500b8119c85da75004183618e2) 00:29:23.389 #16 0x7fedc74c520a in __libc_start_main_impl (/usr/lib64/libc.so.6+0x2820a) (BuildId: 0d710e9d9dc10c500b8119c85da75004183618e2) 00:29:23.389 #17 0x404c74 in _start (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf+0x404c74) (BuildId: 5793b47f54147c706ceb748484a9b20564eb8743) 00:29:23.389 00:29:23.389 SUMMARY: AddressSanitizer: heap-use-after-free /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/module/sock/posix/posix.c:2069 in posix_sock_group_impl_poll 00:29:23.389 Shadow bytes around the buggy address: 00:29:23.389 0x61200002b300: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:23.389 0x61200002b380: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 00:29:23.389 0x61200002b400: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:23.389 0x61200002b480: fd fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa 00:29:23.389 0x61200002b500: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 00:29:23.389 =>0x61200002b580: fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:23.389 0x61200002b600: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:23.389 0x61200002b680: fa fa fa fa fa fa fa fa fd fd fd 
fd fd fd fd fd 00:29:23.389 0x61200002b700: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 00:29:23.389 0x61200002b780: fd fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa 00:29:23.389 0x61200002b800: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 00:29:23.389 Shadow byte legend (one shadow byte represents 8 application bytes): 00:29:23.389 Addressable: 00 00:29:23.389 Partially addressable: 01 02 03 04 05 06 07 00:29:23.389 Heap left redzone: fa 00:29:23.389 Freed heap region: fd 00:29:23.389 Stack left redzone: f1 00:29:23.389 Stack mid redzone: f2 00:29:23.389 Stack right redzone: f3 00:29:23.389 Stack after return: f5 00:29:23.389 Stack use after scope: f8 00:29:23.389 Global redzone: f9 00:29:23.389 Global init order: f6 00:29:23.389 Poisoned by user: f7 00:29:23.389 Container overflow: fc 00:29:23.389 Array cookie: ac 00:29:23.389 Intra object redzone: bb 00:29:23.389 ASan internal: fe 00:29:23.389 Left alloca redzone: ca 00:29:23.389 Right alloca redzone: cb 00:29:23.389 ==2566887==ABORTING 00:29:25.918 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:26.853 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2566887 00:29:26.853 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:26.853 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2566887 00:29:26.853 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:26.853 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:26.853 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:26.853 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:26.853 18:31:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2566887 00:29:28.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 655: 2566887 Aborted (core dumped) $rootdir/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r "trtype:$TEST_TRANSPORT adrfam:IPV4 traddr:$NVMF_FIRST_TARGET_IP trsvcid:$NVMF_PORT" -P 4 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=134 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@664 -- # es=6 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@671 -- # es=0 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # trap - ERR 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 
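[Editorial annotation, not part of the captured console output] The shadow-byte legend printed above is easier to read with the address arithmetic spelled out. Each shadow byte covers 8 application bytes (per the legend), which is why the dump rows advance by 0x80: 16 shadow bytes per row, i.e. 128 application bytes. The run of fd bytes around the bracketed byte in the "=>" row marks heap memory that has already been freed, which is consistent with the heap-use-after-free summary for posix_sock_group_impl_poll at posix.c:2069 and with the "previously allocated by thread T0" stack showing the object came from posix_sock_alloc() via spdk_sock_connect_ext(). The sketch below is purely illustrative: the mem_to_shadow() helper and the 0x7fff8000 offset are the commonly documented defaults for AddressSanitizer on x86-64 Linux and are an assumption here, not something taken from this log.

/*
 * Illustrative sketch only (assumed default x86-64 Linux ASan layout):
 * shadow = (addr >> 3) + 0x7fff8000. One shadow byte describes 8 application
 * bytes; 0x00 = addressable, 0x01-0x07 = partially addressable tail,
 * 0xfa = heap redzone, 0xfd = freed heap region (see legend above).
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uintptr_t mem_to_shadow(uintptr_t addr)
{
    const uintptr_t shadow_offset = 0x7fff8000ULL;  /* assumed default offset */
    return (addr >> 3) + shadow_offset;
}

int main(void)
{
    /* Start of the "=>" dump row that carries the flagged byte above. */
    uintptr_t row = 0x61200002b580ULL;

    printf("app %#" PRIxPTR " -> shadow byte at %#" PRIxPTR "\n",
           row, mem_to_shadow(row));
    return 0;
}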
-- # print_backtrace 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1159 -- # args=('2566887' 'wait' 'nvmf_shutdown_tc4' 'nvmf_shutdown_tc4' '--transport=tcp') 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1159 -- # local args 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1161 -- # xtrace_disable 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:28.751 ========== Backtrace start: ========== 00:29:28.751 00:29:28.751 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:679 -> NOT(["wait"],["2566887"]) 00:29:28.751 ... 00:29:28.751 674 elif [[ -n ${EXIT_STATUS:-} ]] && ((es != EXIT_STATUS)); then 00:29:28.751 675 es=0 00:29:28.751 676 fi 00:29:28.751 677 00:29:28.751 678 # invert error code of any command and also trigger ERR on 0 (unlike bash ! prefix) 00:29:28.751 => 679 ((!es == 0)) 00:29:28.751 680 } 00:29:28.751 681 00:29:28.751 682 function timing() { 00:29:28.751 683 direction="$1" 00:29:28.751 684 testname="$2" 00:29:28.751 ... 00:29:28.751 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh:158 -> nvmf_shutdown_tc4([]) 00:29:28.751 ... 00:29:28.751 153 00:29:28.751 154 # Kill the target half way through 00:29:28.751 155 killprocess $nvmfpid 00:29:28.751 156 sleep 1 00:29:28.751 157 # Due to IOs are completed with errors, perf exits with bad status 00:29:28.751 => 158 NOT wait $perfpid 00:29:28.751 159 stoptarget 00:29:28.751 160 } 00:29:28.751 161 00:29:28.751 162 run_test "nvmf_shutdown_tc1" nvmf_shutdown_tc1 00:29:28.751 163 run_test "nvmf_shutdown_tc2" nvmf_shutdown_tc2 00:29:28.751 ... 00:29:28.751 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvmf_shutdown_tc4"],["nvmf_shutdown_tc4"]) 00:29:28.751 ... 00:29:28.751 1124 timing_enter $test_name 00:29:28.751 1125 echo "************************************" 00:29:28.751 1126 echo "START TEST $test_name" 00:29:28.751 1127 echo "************************************" 00:29:28.751 1128 xtrace_restore 00:29:28.751 1129 time "$@" 00:29:28.751 1130 xtrace_disable 00:29:28.751 1131 echo "************************************" 00:29:28.751 1132 echo "END TEST $test_name" 00:29:28.751 1133 echo "************************************" 00:29:28.751 1134 timing_exit $test_name 00:29:28.751 ... 00:29:28.751 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh:167 -> main(["--transport=tcp"]) 00:29:28.751 ... 00:29:28.751 162 run_test "nvmf_shutdown_tc1" nvmf_shutdown_tc1 00:29:28.751 163 run_test "nvmf_shutdown_tc2" nvmf_shutdown_tc2 00:29:28.751 164 run_test "nvmf_shutdown_tc3" nvmf_shutdown_tc3 00:29:28.751 165 # Temporarily disable on e810 due to issue #3523 00:29:28.751 166 if ! [[ "$SPDK_TEST_NVMF_NICS" == "e810" && "$TEST_TRANSPORT" == "rdma" ]]; then 00:29:28.751 => 167 run_test "nvmf_shutdown_tc4" nvmf_shutdown_tc4 00:29:28.751 168 fi 00:29:28.751 169 00:29:28.751 170 trap - SIGINT SIGTERM EXIT 00:29:28.751 ... 
00:29:28.751 00:29:28.751 ========== Backtrace end ========== 00:29:28.751 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1198 -- # return 0 00:29:28.752 00:29:28.752 real 0m13.239s 00:29:28.752 user 0m36.977s 00:29:28.752 sys 0m5.473s 00:29:28.752 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1 -- # process_shm --id 0 00:29:28.752 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@812 -- # type=--id 00:29:28.752 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@813 -- # id=0 00:29:28.752 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:28.752 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:28.752 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:28.752 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:28.752 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:28.752 18:31:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:28.752 nvmf_trace.0 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@827 -- # return 0 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1 -- # kill -9 2566887 00:29:29.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 1: kill: (2566887) - No such process 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1 -- # true 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1 -- # nvmftestfini 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.009 rmmod nvme_tcp 00:29:29.009 rmmod nvme_fabrics 00:29:29.009 rmmod nvme_keyring 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:29.009 18:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2566571 ']' 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2566571 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2566571 ']' 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2566571 00:29:29.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2566571) - No such process 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2566571 is not found' 00:29:29.009 Process with pid 2566571 is not found 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.009 18:31:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1 -- # exit 1 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # trap - ERR 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # print_backtrace 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1159 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh' 'nvmf_shutdown' '--transport=tcp') 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1159 -- # local args 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1161 -- # xtrace_disable 00:29:31.541 18:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:31.541 ========== Backtrace start: ========== 00:29:31.541 00:29:31.541 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvmf_shutdown"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh"],["--transport=tcp"]) 00:29:31.541 ... 00:29:31.541 1124 timing_enter $test_name 00:29:31.541 1125 echo "************************************" 00:29:31.541 1126 echo "START TEST $test_name" 00:29:31.541 1127 echo "************************************" 00:29:31.541 1128 xtrace_restore 00:29:31.541 1129 time "$@" 00:29:31.541 1130 xtrace_disable 00:29:31.541 1131 echo "************************************" 00:29:31.541 1132 echo "END TEST $test_name" 00:29:31.541 1133 echo "************************************" 00:29:31.541 1134 timing_exit $test_name 00:29:31.541 ... 00:29:31.541 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh:65 -> main(["--transport=tcp"]) 00:29:31.541 ... 00:29:31.541 60 elif [[ $SPDK_TEST_NVMF_TRANSPORT == "rdma" ]]; then 00:29:31.541 61 # Disabled due to https://github.com/spdk/spdk/issues/3345 00:29:31.541 62 # run_test "nvmf_device_removal" test/nvmf/target/device_removal.sh "${TEST_ARGS[@]}" 00:29:31.541 63 run_test "nvmf_srq_overwhelm" "$rootdir/test/nvmf/target/srq_overwhelm.sh" "${TEST_ARGS[@]}" 00:29:31.541 64 fi 00:29:31.541 => 65 run_test "nvmf_shutdown" $rootdir/test/nvmf/target/shutdown.sh "${TEST_ARGS[@]}" 00:29:31.541 66 fi 00:29:31.541 67 run_test "nvmf_nsid" "$rootdir/test/nvmf/target/nsid.sh" "${TEST_ARGS[@]}" 00:29:31.541 68 00:29:31.541 69 trap - SIGINT SIGTERM EXIT 00:29:31.541 ... 00:29:31.541 00:29:31.541 ========== Backtrace end ========== 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1198 -- # return 0 00:29:31.541 00:29:31.541 real 0m57.626s 00:29:31.541 user 2m51.366s 00:29:31.541 sys 0m13.777s 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1 -- # exit 1 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # trap - ERR 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # print_backtrace 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1159 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh' 'nvmf_target_extra' '--transport=tcp') 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1159 -- # local args 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1161 -- # xtrace_disable 00:29:31.541 18:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:31.541 ========== Backtrace start: ========== 00:29:31.541 00:29:31.541 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvmf_target_extra"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh"],["--transport=tcp"]) 00:29:31.541 ... 
00:29:31.541 1124 timing_enter $test_name 00:29:31.541 1125 echo "************************************" 00:29:31.541 1126 echo "START TEST $test_name" 00:29:31.541 1127 echo "************************************" 00:29:31.541 1128 xtrace_restore 00:29:31.541 1129 time "$@" 00:29:31.541 1130 xtrace_disable 00:29:31.541 1131 echo "************************************" 00:29:31.541 1132 echo "END TEST $test_name" 00:29:31.541 1133 echo "************************************" 00:29:31.542 1134 timing_exit $test_name 00:29:31.542 ... 00:29:31.542 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh:15 -> main(["--transport=tcp"]) 00:29:31.542 ... 00:29:31.542 10 if [ ! $(uname -s) = Linux ]; then 00:29:31.542 11 exit 0 00:29:31.542 12 fi 00:29:31.542 13 00:29:31.542 14 run_test "nvmf_target_core" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:29:31.542 => 15 run_test "nvmf_target_extra" $rootdir/test/nvmf/nvmf_target_extra.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:29:31.542 16 run_test "nvmf_host" $rootdir/test/nvmf/nvmf_host.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:29:31.542 17 00:29:31.542 18 # Interrupt mode for now is supported only on the target, with the TCP transport and posix or ssl socket implementations. 00:29:31.542 19 if [[ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" && $SPDK_TEST_URING -eq 0 ]]; then 00:29:31.542 20 run_test "nvmf_target_core_interrupt_mode" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:29:31.542 ... 00:29:31.542 00:29:31.542 ========== Backtrace end ========== 00:29:31.542 18:31:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1198 -- # return 0 00:29:31.542 00:29:31.542 real 18m25.850s 00:29:31.542 user 50m51.461s 00:29:31.542 sys 3m32.343s 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@1129 -- # trap - ERR 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@1129 -- # print_backtrace 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@1159 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh' 'nvmf_tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf') 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@1159 -- # local args 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@1161 -- # xtrace_disable 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:31.542 ========== Backtrace start: ========== 00:29:31.542 00:29:31.542 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvmf_tcp"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh"],["--transport=tcp"]) 00:29:31.542 ... 00:29:31.542 1124 timing_enter $test_name 00:29:31.542 1125 echo "************************************" 00:29:31.542 1126 echo "START TEST $test_name" 00:29:31.542 1127 echo "************************************" 00:29:31.542 1128 xtrace_restore 00:29:31.542 1129 time "$@" 00:29:31.542 1130 xtrace_disable 00:29:31.542 1131 echo "************************************" 00:29:31.542 1132 echo "END TEST $test_name" 00:29:31.542 1133 echo "************************************" 00:29:31.542 1134 timing_exit $test_name 00:29:31.542 ... 00:29:31.542 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh:284 -> main(["/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf"]) 00:29:31.542 ... 
00:29:31.542 279 # list of all tests can properly differentiate them. Please do not merge them into one line. 00:29:31.542 280 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:29:31.542 281 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:29:31.542 282 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:29:31.542 283 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:29:31.542 => 284 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:29:31.542 285 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:29:31.542 286 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:29:31.542 287 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:29:31.542 288 fi 00:29:31.542 289 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:29:31.542 ... 00:29:31.542 00:29:31.542 ========== Backtrace end ========== 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@1198 -- # return 0 00:29:31.542 00:29:31.542 real 22m58.008s 00:29:31.542 user 62m47.103s 00:29:31.542 sys 4m42.561s 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@1 -- # autotest_cleanup 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@1396 -- # local autotest_es=1 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@1397 -- # xtrace_disable 00:29:31.542 18:31:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:41.579 ##### CORE BT spdk_nvme_perf_2566887.core.bt.txt ##### 00:29:41.579 00:29:41.579 gdb: warning: Couldn't determine a path for the index cache directory. 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_0 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_1 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_2 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_3 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_4 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_5 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_6 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_9 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_10 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_11 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_12 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_13 (deleted) during file-backed mapping note processing 00:29:41.579 00:29:41.579 warning: Can't open file /dev/hugepages/spdk_pid2566887map_14 (deleted) during file-backed mapping note processing 00:29:41.579 
[New LWP 2566887] 00:29:41.579 [New LWP 2566896] 00:29:41.579 [New LWP 2566890] 00:29:41.579 [Thread debugging using libthread_db enabled] 00:29:41.579 Using host libthread_db library "/usr/lib64/libthread_db.so.1". 00:29:41.579 Core was generated by `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1'. 00:29:41.579 Program terminated with signal SIGABRT, Aborted. 00:29:41.579 #0 0x00007fedc752d834 in __pthread_kill_implementation () from /usr/lib64/libc.so.6 00:29:41.579 [Current thread is 1 (Thread 0x7fedc6a39a40 (LWP 2566887))] 00:29:41.579 00:29:41.579 Thread 3 (Thread 0x7fedc36006c0 (LWP 2566890)): 00:29:41.579 #0 0x00007fedc75b2e62 in epoll_wait () from /usr/lib64/libc.so.6 00:29:41.579 No symbol table info available. 00:29:41.579 #1 0x00007fedc8c8e9a0 in eal_intr_handle_interrupts (pfd=5, totalfds=1) at ../lib/eal/linux/eal_interrupts.c:1077 00:29:41.579 events = {{events = 0, data = {ptr = 0x0, fd = 0, u32 = 0, u64 = 0}}} 00:29:41.579 nfds = 0 00:29:41.579 #2 0x00007fedc8c8ee8a in eal_intr_thread_main (arg=0x0) at ../lib/eal/linux/eal_interrupts.c:1163 00:29:41.579 pipe_event = {events = 3, data = {ptr = 0x3, fd = 3, u32 = 3, u64 = 3}} 00:29:41.579 src = 0x0 00:29:41.579 numfds = 1 00:29:41.579 pfd = 5 00:29:41.579 __func__ = "eal_intr_thread_main" 00:29:41.579 #3 0x00007fedc8c3065e in control_thread_start (arg=0x60300002d0a0) at ../lib/eal/common/eal_common_thread.c:282 00:29:41.579 params = 0x60300002d0a0 00:29:41.579 start_arg = 0x0 00:29:41.579 start_routine = 0x7fedc8c8ea74 00:29:41.579 #4 0x00007fedc8c78a91 in thread_start_wrapper (arg=0x7fedc4b094a0) at ../lib/eal/unix/rte_thread.c:114 00:29:41.579 ctx = 0x7fedc4b094a0 00:29:41.579 thread_func = 0x7fedc8c305c5 00:29:41.579 thread_args = 0x60300002d0a0 00:29:41.579 ret = 0 00:29:41.579 #5 0x00007fedc752b897 in start_thread () from /usr/lib64/libc.so.6 00:29:41.579 No symbol table info available. 00:29:41.579 #6 0x00007fedc75b2a5c in clone3 () from /usr/lib64/libc.so.6 00:29:41.579 No symbol table info available. 00:29:41.579 00:29:41.579 Thread 2 (Thread 0x7fedc10006c0 (LWP 2566896)): 00:29:41.579 #0 0x00007fedc7576163 in clock_nanosleep@GLIBC_2.2.5 () from /usr/lib64/libc.so.6 00:29:41.579 No symbol table info available. 00:29:41.579 #1 0x00007fedc7588c97 in nanosleep () from /usr/lib64/libc.so.6 00:29:41.579 No symbol table info available. 00:29:41.579 #2 0x00007fedc759a5d3 in sleep () from /usr/lib64/libc.so.6 00:29:41.579 No symbol table info available. 00:29:41.579 #3 0x000000000042d547 in nvme_poll_ctrlrs (arg=0x0) at perf.c:3219 00:29:41.579 entry = 0x0 00:29:41.579 oldstate = 1 00:29:41.579 rc = 0 00:29:41.579 #4 0x00007fedc752b897 in start_thread () from /usr/lib64/libc.so.6 00:29:41.579 No symbol table info available. 00:29:41.579 #5 0x00007fedc75b2a5c in clone3 () from /usr/lib64/libc.so.6 00:29:41.579 No symbol table info available. 00:29:41.579 00:29:41.579 Thread 1 (Thread 0x7fedc6a39a40 (LWP 2566887)): 00:29:41.579 #0 0x00007fedc752d834 in __pthread_kill_implementation () from /usr/lib64/libc.so.6 00:29:41.579 No symbol table info available. 00:29:41.579 #1 0x00007fedc74db8ee in raise () from /usr/lib64/libc.so.6 00:29:41.579 No symbol table info available. 00:29:41.579 #2 0x00007fedc74c38ff in abort () from /usr/lib64/libc.so.6 00:29:41.579 No symbol table info available. 00:29:41.579 #3 0x00007fedca13cf0f in __sanitizer::Abort() () from /usr/lib64/libasan.so.8 00:29:41.579 No symbol table info available. 
00:29:41.579 #4 0x00007fedca14c401 in __sanitizer::Die() () from /usr/lib64/libasan.so.8 00:29:41.579 No symbol table info available. 00:29:41.579 #5 0x00007fedca127f81 in __asan::ScopedInErrorReport::~ScopedInErrorReport() () from /usr/lib64/libasan.so.8 00:29:41.579 No symbol table info available. 00:29:41.579 #6 0x00007fedca127573 in __asan::ReportGenericError(unsigned long, unsigned long, unsigned long, unsigned long, bool, unsigned long, unsigned int, bool) [clone .part.0] () from /usr/lib64/libasan.so.8 00:29:41.579 No symbol table info available. 00:29:41.579 #7 0x00007fedca1286fc in __asan_report_load8 () from /usr/lib64/libasan.so.8 00:29:41.579 No symbol table info available. 00:29:41.579 #8 0x00007fedca0009e8 in posix_sock_group_impl_poll (_group=0x608000005120, max_events=32, socks=0x7fedc4bee620) at posix.c:2069 00:29:41.579 group = 0x608000005120 00:29:41.579 sock = 0x61200002b540 00:29:41.579 tmp = 0x61200002b540 00:29:41.579 num_events = 1926254832 00:29:41.579 i = 32766 00:29:41.579 rc = -1 00:29:41.579 psock = 0x0 00:29:41.579 ptmp = 0x7ffe72d05220 00:29:41.579 events = {{events = 1, data = {ptr = 0x61200001bf40, fd = 114496, u32 = 114496, u64 = 106790066962240}}, {events = 1, data = {ptr = 0x61200001c540, fd = 116032, u32 = 116032, u64 = 106790066963776}}, {events = 1, data = {ptr = 0x61200001c840, fd = 116800, u32 = 116800, u64 = 106790066964544}}, {events = 1, data = {ptr = 0x61200001c240, fd = 115264, u32 = 115264, u64 = 106790066963008}}, {events = 0, data = {ptr = 0x0, fd = 0, u32 = 0, u64 = 0}} } 00:29:41.579 last_placement_id = -1 00:29:41.579 __PRETTY_FUNCTION__ = "posix_sock_group_impl_poll" 00:29:41.580 #9 0x00007fedc9520f9f in sock_group_impl_poll_count (group_impl=0x608000005120, group=0x604000002f90, max_events=32) at sock.c:757 00:29:41.580 socks = {0x61200002b840, 0x61200002be40, 0x61200002bb40, 0x61200002c140, 0x0 } 00:29:41.580 num_events = 4 00:29:41.580 i = 4 00:29:41.580 __PRETTY_FUNCTION__ = "sock_group_impl_poll_count" 00:29:41.580 #10 0x00007fedc95215cd in spdk_sock_group_poll_count (group=0x604000002f90, max_events=32) at sock.c:791 00:29:41.580 group_impl = 0x608000005120 00:29:41.580 rc = 0 00:29:41.580 num_events = 0 00:29:41.580 __func__ = "spdk_sock_group_poll_count" 00:29:41.580 #11 0x00007fedc9520d19 in spdk_sock_group_poll (group=0x604000002f90) at sock.c:742 00:29:41.580 No locals. 00:29:41.580 #12 0x00007fedc9941a02 in nvme_tcp_poll_group_process_completions (tgroup=0x60e000004640, completions_per_qpair=0, disconnected_qpair_cb=0x40f077 ) at nvme_tcp.c:2829 00:29:41.580 group = 0x60e000004640 00:29:41.580 qpair = 0x0 00:29:41.580 tmp_qpair = 0x0 00:29:41.580 tqpair = 0x0 00:29:41.580 tmp_tqpair = 0x2000047a7000 00:29:41.580 num_events = 0 00:29:41.580 #13 0x00007fedc98fca2f in nvme_transport_poll_group_process_completions (tgroup=0x60e000004640, completions_per_qpair=0, disconnected_qpair_cb=0x40f077 ) at nvme_transport.c:780 00:29:41.580 No locals. 
00:29:41.580 #14 0x00007fedc99647ba in spdk_nvme_poll_group_process_completions (group=0x608000005020, completions_per_qpair=0, disconnected_qpair_cb=0x40f077 ) at nvme_poll_group.c:350 00:29:41.580 tgroup = 0x60e000004640 00:29:41.580 local_completions = 0 00:29:41.580 error_reason = 0 00:29:41.580 num_completions = 0 00:29:41.580 __PRETTY_FUNCTION__ = "spdk_nvme_poll_group_process_completions" 00:29:41.580 #15 0x000000000040f3d2 in nvme_check_io (ns_ctx=0x611000015d00) at perf.c:963 00:29:41.580 rc = 140659255742361 00:29:41.580 #16 0x000000000041e134 in work_fn (arg=0x604000002250) at perf.c:1792 00:29:41.580 all_draining = true 00:29:41.580 tsc_start = 28495043238918963 00:29:41.580 tsc_end = 28495097238918963 00:29:41.580 tsc_current = 28495056117375729 00:29:41.580 tsc_next_print = 28495056738918963 00:29:41.580 worker = 0x604000002250 00:29:41.580 ns_ctx = 0x611000015d00 00:29:41.580 unfinished_ns_ctx = 1926256368 00:29:41.580 warmup = false 00:29:41.580 rc = -1 00:29:41.580 check_rc = 0 00:29:41.580 check_now = 28495056117390390 00:29:41.580 swap = {tqh_first = 0x0, tqh_last = 0x7fedc4848ea0} 00:29:41.580 task = 0x0 00:29:41.580 #17 0x000000000042e806 in main (argc=15, argv=0x7ffe72d05a08) at perf.c:3376 00:29:41.580 rc = 0 00:29:41.580 worker = 0x0 00:29:41.580 main_worker = 0x604000002250 00:29:41.580 ns_ctx = 0x7fedc952d860 00:29:41.580 opts = {name = 0x448560 "perf", core_mask = 0x7fedc90492e0 "0x1", lcore_map = 0x0, shm_id = -1, mem_channel = -1, main_core = -1, mem_size = -1, no_pci = true, hugepage_single_segments = false, unlink_hugepage = false, no_huge = false, reserved = 0, num_pci_addr = 0, hugedir = 0x0, pci_blocked = 0x0, pci_allowed = 0x4a3000 , iova_mode = 0x0, base_virtaddr = 35184372088832, env_context = 0x0, vf_token = 0x0, opts_size = 128, enforce_numa = false, reserved2 = "\000\000\000\000\000\000"} 00:29:41.580 thread_id = 140659121981120 00:29:41.580 __PRETTY_FUNCTION__ = "main" 00:29:41.580 00:29:41.580 -- 00:29:42.975 INFO: APP EXITING 00:29:42.975 INFO: killing all VMs 00:29:42.975 INFO: killing vhost app 00:29:42.975 INFO: EXIT DONE 00:29:43.909 Waiting for block devices as requested 00:29:44.166 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:44.166 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:44.166 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:44.424 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:44.424 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:44.424 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:44.424 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:44.682 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:44.682 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:44.682 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:44.682 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:44.941 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:44.941 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:44.941 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:44.941 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:45.199 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:45.199 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:46.574 Cleaning 00:29:46.574 Removing: /var/run/dpdk/spdk0/config 00:29:46.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:46.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:46.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:46.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:46.574 Removing: 
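[Editorial annotation, not part of the captured console output] The Thread 1 frames above place the abort in posix_sock_group_impl_poll() at posix.c:2069, with the loop cursors sock and tmp both holding 0x61200002b540, an address that falls inside the freed (fd) range of the shadow dump earlier in this log. The sketch below is not SPDK's implementation; it is a self-contained illustration of the general failure class in which a poll loop walks a queue of sockets with a "safe" iterator while a per-socket callback removes and frees another entry the iterator has already cached. The struct sock, its cb field, cb_close_peer() and group_poll() are hypothetical names invented for the example.

/*
 * Hypothetical reproduction of the failure class (not SPDK code).
 * TAILQ_FOREACH_SAFE caches only the next pointer of the current element;
 * if a callback frees that cached element, the following loop step reads
 * freed memory, which ASan reports as heap-use-after-free.
 */
#include <stdlib.h>
#include <sys/queue.h>

#ifndef TAILQ_FOREACH_SAFE              /* glibc's sys/queue.h lacks the BSD _SAFE variant */
#define TAILQ_FOREACH_SAFE(var, head, field, tvar)                  \
    for ((var) = TAILQ_FIRST((head));                               \
         (var) && ((tvar) = TAILQ_NEXT((var), field), 1);           \
         (var) = (tvar))
#endif

struct sock {
    TAILQ_ENTRY(sock) link;
    void (*cb)(struct sock *self, void *group); /* hypothetical event callback */
};

TAILQ_HEAD(sock_list, sock);

/* Callback that tears down a *different* socket still queued in the group. */
static void cb_close_peer(struct sock *self, void *arg)
{
    struct sock_list *group = arg;
    struct sock *peer = TAILQ_NEXT(self, link);  /* some other member */

    if (peer != NULL) {
        TAILQ_REMOVE(group, peer, link);
        free(peer);                              /* freed while still "pending" */
    }
}

static void group_poll(struct sock_list *group)
{
    struct sock *s, *tmp;

    /* The SAFE iterator caches 'tmp' before the callback runs... */
    TAILQ_FOREACH_SAFE(s, group, link, tmp) {
        s->cb(s, group);    /* ...and the callback may free that cached entry, */
    }                       /* so the next step dereferences freed memory.     */
}

int main(void)
{
    struct sock_list group = TAILQ_HEAD_INITIALIZER(group);
    struct sock *a = calloc(1, sizeof(*a));
    struct sock *b = calloc(1, sizeof(*b));

    a->cb = cb_close_peer;
    b->cb = cb_close_peer;
    TAILQ_INSERT_TAIL(&group, a, link);
    TAILQ_INSERT_TAIL(&group, b, link);

    group_poll(&group);     /* under -fsanitize=address this reports heap-use-after-free */

    free(a);
    return 0;
}

Compiled with -fsanitize=address, this sketch produces a report of the same shape as the one above: a READ of pointer size on a freed heap object, reached through a list link during the next iteration of the poll loop.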
/var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:46.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:46.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:46.574 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:46.574 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:46.574 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:46.574 Removing: /var/run/dpdk/spdk1/config 00:29:46.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:46.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:46.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:46.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:46.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:46.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:46.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:46.574 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:46.574 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:46.574 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:46.574 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:46.574 Removing: /var/run/dpdk/spdk2/config 00:29:46.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:46.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:46.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:46.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:46.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:46.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:46.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:46.574 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:46.574 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:46.574 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:46.574 Removing: /var/run/dpdk/spdk3/config 00:29:46.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:46.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:46.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:46.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:46.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:46.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:46.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:46.574 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:46.574 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:46.574 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:46.574 Removing: /var/run/dpdk/spdk4/config 00:29:46.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:46.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:46.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:46.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:46.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:46.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:46.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:46.574 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:46.574 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:46.574 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:46.574 Removing: /dev/shm/bdev_svc_trace.1 00:29:46.574 Removing: /dev/shm/nvmf_trace.0 00:29:46.574 Removing: /dev/shm/spdk_tgt_trace.pid2324095 00:29:46.574 Removing: /var/run/dpdk/spdk0 00:29:46.574 Removing: /var/run/dpdk/spdk1 00:29:46.574 Removing: /var/run/dpdk/spdk2 00:29:46.574 
Removing: /var/run/dpdk/spdk3 00:29:46.574 Removing: /var/run/dpdk/spdk4 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2320566 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2321603 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2324095 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2324814 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2325769 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2326187 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2327171 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2327310 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2327839 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2329298 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2330479 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2331079 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2331674 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2332275 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2332758 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2333040 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2333197 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2333513 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2333968 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2336730 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2337293 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2337726 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2337983 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2339226 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2339365 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2340598 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2340853 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2341290 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2341437 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2341863 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2342006 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2343052 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2343326 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2343537 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2346173 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2349096 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2356963 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2357368 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2360161 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2360441 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2363358 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2367348 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2369716 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2377035 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2382669 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2384618 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2385420 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2396594 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2399157 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2456681 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2460144 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2464246 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2470473 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2500814 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2503991 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2505171 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2506627 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2506909 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2507189 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2507502 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2508419 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2509875 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2511270 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2511971 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2513874 00:29:46.574 Removing: 
/var/run/dpdk/spdk_pid2514686 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2515497 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2518178 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2521859 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2521860 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2521861 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2524340 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2526784 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2530839 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2554936 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2558129 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2562188 00:29:46.574 Removing: /var/run/dpdk/spdk_pid2563782 00:29:46.832 Removing: /var/run/dpdk/spdk_pid2565400 00:29:46.832 Removing: /var/run/dpdk/spdk_pid2566887 00:29:46.832 Clean 00:29:48.728 18:32:00 nvmf_tcp -- common/autotest_common.sh@1453 -- # return 1 00:29:48.728 18:32:00 nvmf_tcp -- common/autotest_common.sh@1 -- # : 00:29:48.728 18:32:00 nvmf_tcp -- common/autotest_common.sh@1 -- # exit 1 00:29:48.728 18:32:00 -- spdk/autorun.sh@27 -- $ trap - ERR 00:29:48.728 18:32:00 -- spdk/autorun.sh@27 -- $ print_backtrace 00:29:48.728 18:32:00 -- common/autotest_common.sh@1157 -- $ [[ ehxBET =~ e ]] 00:29:48.728 18:32:00 -- common/autotest_common.sh@1159 -- $ args=('/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf') 00:29:48.728 18:32:00 -- common/autotest_common.sh@1159 -- $ local args 00:29:48.728 18:32:00 -- common/autotest_common.sh@1161 -- $ xtrace_disable 00:29:48.728 18:32:00 -- common/autotest_common.sh@10 -- $ set +x 00:29:48.728 ========== Backtrace start: ========== 00:29:48.728 00:29:48.728 in spdk/autorun.sh:27 -> main(["/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf"]) 00:29:48.728 ... 00:29:48.728 22 trap 'timing_finish || exit 1' EXIT 00:29:48.728 23 00:29:48.728 24 # Runs agent scripts 00:29:48.728 25 $rootdir/autobuild.sh "$conf" 00:29:48.728 26 if ((SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1)); then 00:29:48.728 => 27 sudo -E $rootdir/autotest.sh "$conf" 00:29:48.728 28 fi 00:29:48.728 ... 
00:29:48.728 00:29:48.728 ========== Backtrace end ========== 00:29:48.728 18:32:00 -- common/autotest_common.sh@1198 -- $ return 0 00:29:48.728 18:32:00 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:48.728 18:32:00 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:29:48.728 18:32:00 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:48.728 18:32:00 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:48.728 18:32:00 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:48.738 [Pipeline] } 00:29:48.753 [Pipeline] // stage 00:29:48.759 [Pipeline] } 00:29:48.772 [Pipeline] // timeout 00:29:48.777 [Pipeline] } 00:29:48.781 ERROR: script returned exit code 1 00:29:48.781 Setting overall build result to FAILURE 00:29:48.792 [Pipeline] // catchError 00:29:48.796 [Pipeline] } 00:29:48.808 [Pipeline] // wrap 00:29:48.812 [Pipeline] } 00:29:48.822 [Pipeline] // catchError 00:29:48.828 [Pipeline] stage 00:29:48.829 [Pipeline] { (Epilogue) 00:29:48.837 [Pipeline] catchError 00:29:48.839 [Pipeline] { 00:29:48.847 [Pipeline] echo 00:29:48.848 Cleanup processes 00:29:48.852 [Pipeline] sh 00:29:49.130 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:49.130 2310184 sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732813496 00:29:49.130 2310232 bash /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732813496 00:29:49.130 2573720 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:49.141 [Pipeline] sh 00:29:49.419 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:49.419 ++ grep -v 'sudo pgrep' 00:29:49.419 ++ awk '{print $1}' 00:29:49.419 + sudo kill -9 2310184 2310232 00:29:49.429 [Pipeline] sh 00:29:49.709 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:54.983 [Pipeline] sh 00:29:55.268 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:55.268 Artifacts sizes are good 00:29:55.285 [Pipeline] archiveArtifacts 00:29:55.294 Archiving artifacts 00:29:56.639 [Pipeline] sh 00:29:56.921 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:56.934 [Pipeline] cleanWs 00:29:56.943 [WS-CLEANUP] Deleting project workspace... 00:29:56.943 [WS-CLEANUP] Deferred wipeout is used... 00:29:56.949 [WS-CLEANUP] done 00:29:56.951 [Pipeline] } 00:29:56.968 [Pipeline] // catchError 00:29:56.980 [Pipeline] echo 00:29:56.982 Tests finished with errors. Please check the logs for more info. 00:29:56.985 [Pipeline] echo 00:29:56.987 Execution node will be rebooted. 00:29:57.017 [Pipeline] build 00:29:57.020 Scheduling project: reset-job 00:29:57.034 [Pipeline] sh 00:29:57.314 + logger -p user.err -t JENKINS-CI 00:29:57.323 [Pipeline] } 00:29:57.337 [Pipeline] // stage 00:29:57.343 [Pipeline] } 00:29:57.357 [Pipeline] // node 00:29:57.363 [Pipeline] End of Pipeline 00:29:57.401 Finished: FAILURE